2026-02-19 02:26:47.077095 | Job console starting
2026-02-19 02:26:47.091494 | Updating git repos
2026-02-19 02:26:47.199629 | Cloning repos into workspace
2026-02-19 02:26:47.443812 | Restoring repo states
2026-02-19 02:26:47.472414 | Merging changes
2026-02-19 02:26:47.472438 | Checking out repos
2026-02-19 02:26:47.759397 | Preparing playbooks
2026-02-19 02:26:48.486208 | Running Ansible setup
2026-02-19 02:26:52.776402 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-19 02:26:53.588070 |
2026-02-19 02:26:53.588267 | PLAY [Base pre]
2026-02-19 02:26:53.605597 |
2026-02-19 02:26:53.605732 | TASK [Setup log path fact]
2026-02-19 02:26:53.636895 | orchestrator | ok
2026-02-19 02:26:53.655373 |
2026-02-19 02:26:53.655527 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-19 02:26:53.686587 | orchestrator | ok
2026-02-19 02:26:53.698706 |
2026-02-19 02:26:53.698818 | TASK [emit-job-header : Print job information]
2026-02-19 02:26:53.746672 | # Job Information
2026-02-19 02:26:53.747008 | Ansible Version: 2.16.14
2026-02-19 02:26:53.747074 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-19 02:26:53.747135 | Pipeline: periodic-midnight
2026-02-19 02:26:53.747194 | Executor: 521e9411259a
2026-02-19 02:26:53.747231 | Triggered by: https://github.com/osism/testbed
2026-02-19 02:26:53.747270 | Event ID: e3bbeb8bf1dc4b2ba6ac7069a9163685
2026-02-19 02:26:53.757373 |
2026-02-19 02:26:53.757514 | LOOP [emit-job-header : Print node information]
2026-02-19 02:26:53.893089 | orchestrator | ok:
2026-02-19 02:26:53.893425 | orchestrator | # Node Information
2026-02-19 02:26:53.893483 | orchestrator | Inventory Hostname: orchestrator
2026-02-19 02:26:53.893528 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-19 02:26:53.893566 | orchestrator | Username: zuul-testbed03
2026-02-19 02:26:53.893603 | orchestrator | Distro: Debian 12.13
2026-02-19 02:26:53.893643 | orchestrator | Provider: static-testbed
2026-02-19 02:26:53.893680 | orchestrator | Region:
2026-02-19 02:26:53.893717 | orchestrator | Label: testbed-orchestrator
2026-02-19 02:26:53.893752 | orchestrator | Product Name: OpenStack Nova
2026-02-19 02:26:53.893785 | orchestrator | Interface IP: 81.163.193.140
2026-02-19 02:26:53.922009 |
2026-02-19 02:26:53.922286 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-19 02:26:54.424441 | orchestrator -> localhost | changed
2026-02-19 02:26:54.439539 |
2026-02-19 02:26:54.439863 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-19 02:26:55.529109 | orchestrator -> localhost | changed
2026-02-19 02:26:55.555005 |
2026-02-19 02:26:55.555278 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-19 02:26:55.877052 | orchestrator -> localhost | ok
2026-02-19 02:26:55.893151 |
2026-02-19 02:26:55.893410 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-19 02:26:55.933029 | orchestrator | ok
2026-02-19 02:26:55.953735 | orchestrator | included: /var/lib/zuul/builds/e59f2d7123e8471ba930463cfd363772/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-19 02:26:55.962328 |
2026-02-19 02:26:55.962435 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-19 02:26:58.242229 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-19 02:26:58.242739 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/e59f2d7123e8471ba930463cfd363772/work/e59f2d7123e8471ba930463cfd363772_id_rsa
2026-02-19 02:26:58.242892 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/e59f2d7123e8471ba930463cfd363772/work/e59f2d7123e8471ba930463cfd363772_id_rsa.pub
2026-02-19 02:26:58.242975 | orchestrator -> localhost | The key fingerprint is:
2026-02-19 02:26:58.243046 | orchestrator -> localhost | SHA256:W073F3g1FfsNz8krGlI6GuE16BnYIvvO/G7Q2aTbgHo zuul-build-sshkey
2026-02-19 02:26:58.243112 | orchestrator -> localhost | The key's randomart image is:
2026-02-19 02:26:58.243220 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-19 02:26:58.243288 | orchestrator -> localhost | | .o|
2026-02-19 02:26:58.243351 | orchestrator -> localhost | | o|
2026-02-19 02:26:58.243410 | orchestrator -> localhost | | .o.|
2026-02-19 02:26:58.243468 | orchestrator -> localhost | | o .. o+*|
2026-02-19 02:26:58.243526 | orchestrator -> localhost | | . oo=Soo... ==|
2026-02-19 02:26:58.243591 | orchestrator -> localhost | | oo+==*+. .. o|
2026-02-19 02:26:58.243652 | orchestrator -> localhost | | .. .==+.. ....|
2026-02-19 02:26:58.243709 | orchestrator -> localhost | | .+E oo.o o .. |
2026-02-19 02:26:58.243767 | orchestrator -> localhost | | o=++ . |
2026-02-19 02:26:58.243824 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-19 02:26:58.243969 | orchestrator -> localhost | ok: Runtime: 0:00:01.759548
2026-02-19 02:26:58.259570 |
2026-02-19 02:26:58.259730 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-19 02:26:58.300237 | orchestrator | ok
2026-02-19 02:26:58.315533 | orchestrator | included: /var/lib/zuul/builds/e59f2d7123e8471ba930463cfd363772/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-19 02:26:58.329092 |
2026-02-19 02:26:58.329240 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-19 02:26:58.353865 | orchestrator | skipping: Conditional result was False
2026-02-19 02:26:58.362089 |
2026-02-19 02:26:58.362216 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-19 02:26:59.091186 | orchestrator | changed
2026-02-19 02:26:59.102152 |
2026-02-19 02:26:59.102342 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-19 02:26:59.443003 | orchestrator | ok
2026-02-19 02:26:59.451132 |
2026-02-19 02:26:59.451275 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-19 02:26:59.958875 | orchestrator | ok
2026-02-19 02:26:59.972999 |
2026-02-19 02:26:59.973216 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-19 02:27:00.514607 | orchestrator | ok
2026-02-19 02:27:00.521053 |
2026-02-19 02:27:00.521187 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-19 02:27:00.545074 | orchestrator | skipping: Conditional result was False
2026-02-19 02:27:00.552570 |
2026-02-19 02:27:00.552682 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-19 02:27:01.013556 | orchestrator -> localhost | changed
2026-02-19 02:27:01.042369 |
2026-02-19 02:27:01.042554 | TASK [add-build-sshkey : Add back temp key]
2026-02-19 02:27:01.443811 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/e59f2d7123e8471ba930463cfd363772/work/e59f2d7123e8471ba930463cfd363772_id_rsa (zuul-build-sshkey)
2026-02-19 02:27:01.444413 | orchestrator -> localhost | ok: Runtime: 0:00:00.020483
2026-02-19 02:27:01.463134 |
2026-02-19 02:27:01.463448 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-19 02:27:01.911964 | orchestrator | ok
2026-02-19 02:27:01.921152 |
2026-02-19 02:27:01.921311 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-19 02:27:01.958214 | orchestrator | skipping: Conditional result was False
2026-02-19 02:27:02.018113 |
2026-02-19 02:27:02.018292 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-19 02:27:02.431787 | orchestrator | ok
2026-02-19 02:27:02.446758 |
2026-02-19 02:27:02.446946 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-19 02:27:02.494956 | orchestrator | ok
2026-02-19 02:27:02.505633 |
2026-02-19 02:27:02.505769 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-19 02:27:02.833925 | orchestrator -> localhost | ok
2026-02-19 02:27:02.841790 |
2026-02-19 02:27:02.841908 | TASK [validate-host : Collect information about the host]
2026-02-19 02:27:04.079791 | orchestrator | ok
2026-02-19 02:27:04.093596 |
2026-02-19 02:27:04.093735 | TASK [validate-host : Sanitize hostname]
2026-02-19 02:27:04.169879 | orchestrator | ok
2026-02-19 02:27:04.178992 |
2026-02-19 02:27:04.179153 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-19 02:27:04.785359 | orchestrator -> localhost | changed
2026-02-19 02:27:04.800313 |
2026-02-19 02:27:04.800474 | TASK [validate-host : Collect information about zuul worker]
2026-02-19 02:27:05.314434 | orchestrator | ok
2026-02-19 02:27:05.323127 |
2026-02-19 02:27:05.323324 | TASK [validate-host : Write out all zuul information for each host]
2026-02-19 02:27:05.887537 | orchestrator -> localhost | changed
2026-02-19 02:27:05.902154 |
2026-02-19 02:27:05.902303 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-19 02:27:06.293625 | orchestrator | ok
2026-02-19 02:27:06.302581 |
2026-02-19 02:27:06.302716 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-19 02:27:34.580250 | orchestrator | changed:
2026-02-19 02:27:34.580489 | orchestrator | .d..t...... src/
2026-02-19 02:27:34.580524 | orchestrator | .d..t...... src/github.com/
2026-02-19 02:27:34.580549 | orchestrator | .d..t...... src/github.com/osism/
2026-02-19 02:27:34.580571 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-19 02:27:34.580591 | orchestrator | RedHat.yml
2026-02-19 02:27:34.594923 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-19 02:27:34.594940 | orchestrator | RedHat.yml
2026-02-19 02:27:34.594992 | orchestrator | = 2.2.0"...
2026-02-19 02:27:46.259374 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-19 02:27:46.279186 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-02-19 02:27:46.779062 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-02-19 02:27:47.409926 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-02-19 02:27:47.476984 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-19 02:27:48.163624 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-19 02:27:48.600841 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-19 02:27:49.443485 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-19 02:27:49.443633 | orchestrator |
2026-02-19 02:27:49.443648 | orchestrator | Providers are signed by their developers.
2026-02-19 02:27:49.443660 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-19 02:27:49.443670 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-19 02:27:49.443683 | orchestrator |
2026-02-19 02:27:49.443689 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-19 02:27:49.443713 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-19 02:27:49.443722 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-19 02:27:49.443731 | orchestrator | you run "tofu init" in the future.
2026-02-19 02:27:49.443912 | orchestrator |
2026-02-19 02:27:49.443932 | orchestrator | OpenTofu has been successfully initialized!
2026-02-19 02:27:49.443945 | orchestrator |
2026-02-19 02:27:49.443959 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-19 02:27:49.443967 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-19 02:27:49.443982 | orchestrator | should now work.
2026-02-19 02:27:49.443989 | orchestrator |
2026-02-19 02:27:49.443995 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-19 02:27:49.444004 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-19 02:27:49.444014 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-19 02:27:49.619003 | orchestrator | Created and switched to workspace "ci"!
2026-02-19 02:27:49.619053 | orchestrator |
2026-02-19 02:27:49.619059 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-19 02:27:49.619065 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-19 02:27:49.619088 | orchestrator | for this configuration.
2026-02-19 02:27:49.780828 | orchestrator | ci.auto.tfvars
2026-02-19 02:27:49.786769 | orchestrator | default_custom.tf
2026-02-19 02:27:50.930960 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-19 02:27:51.427902 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-19 02:27:51.676693 | orchestrator |
2026-02-19 02:27:51.676795 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-19 02:27:51.676802 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-19 02:27:51.676807 | orchestrator |   + create
2026-02-19 02:27:51.676812 | orchestrator |  <= read (data resources)
2026-02-19 02:27:51.676817 | orchestrator |
2026-02-19 02:27:51.676822 | orchestrator | OpenTofu will perform the following actions:
2026-02-19 02:27:51.676838 | orchestrator |
2026-02-19 02:27:51.676843 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-02-19 02:27:51.676848 | orchestrator |   # (config refers to values not yet known)
2026-02-19 02:27:51.676853 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-02-19 02:27:51.676857 | orchestrator |       + checksum = (known after apply)
2026-02-19 02:27:51.676861 | orchestrator |       + created_at = (known after apply)
2026-02-19 02:27:51.676866 | orchestrator |       + file = (known after apply)
2026-02-19 02:27:51.676870 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.676920 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.676924 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-19 02:27:51.676929 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-19 02:27:51.676933 | orchestrator |       + most_recent = true
2026-02-19 02:27:51.676936 | orchestrator |       + name = (known after apply)
2026-02-19 02:27:51.676940 | orchestrator |       + protected = (known after apply)
2026-02-19 02:27:51.676944 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.676951 | orchestrator |       + schema = (known after apply)
2026-02-19 02:27:51.676955 | orchestrator |       + size_bytes = (known after apply)
2026-02-19 02:27:51.676958 | orchestrator |       + tags = (known after apply)
2026-02-19 02:27:51.676962 | orchestrator |       + updated_at = (known after apply)
2026-02-19 02:27:51.676966 | orchestrator |     }
2026-02-19 02:27:51.676970 | orchestrator |
2026-02-19 02:27:51.676974 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-02-19 02:27:51.676978 | orchestrator |   # (config refers to values not yet known)
2026-02-19 02:27:51.676982 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-02-19 02:27:51.676986 | orchestrator |       + checksum = (known after apply)
2026-02-19 02:27:51.676990 | orchestrator |       + created_at = (known after apply)
2026-02-19 02:27:51.676994 | orchestrator |       + file = (known after apply)
2026-02-19 02:27:51.676997 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677001 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677005 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-19 02:27:51.677008 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-19 02:27:51.677012 | orchestrator |       + most_recent = true
2026-02-19 02:27:51.677016 | orchestrator |       + name = (known after apply)
2026-02-19 02:27:51.677020 | orchestrator |       + protected = (known after apply)
2026-02-19 02:27:51.677023 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677027 | orchestrator |       + schema = (known after apply)
2026-02-19 02:27:51.677031 | orchestrator |       + size_bytes = (known after apply)
2026-02-19 02:27:51.677035 | orchestrator |       + tags = (known after apply)
2026-02-19 02:27:51.677039 | orchestrator |       + updated_at = (known after apply)
2026-02-19 02:27:51.677042 | orchestrator |     }
2026-02-19 02:27:51.677046 | orchestrator |
2026-02-19 02:27:51.677050 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-02-19 02:27:51.677054 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-02-19 02:27:51.677058 | orchestrator |       + content = (known after apply)
2026-02-19 02:27:51.677063 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-19 02:27:51.677067 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-19 02:27:51.677070 | orchestrator |       + content_md5 = (known after apply)
2026-02-19 02:27:51.677074 | orchestrator |       + content_sha1 = (known after apply)
2026-02-19 02:27:51.677078 | orchestrator |       + content_sha256 = (known after apply)
2026-02-19 02:27:51.677081 | orchestrator |       + content_sha512 = (known after apply)
2026-02-19 02:27:51.677085 | orchestrator |       + directory_permission = "0777"
2026-02-19 02:27:51.677089 | orchestrator |       + file_permission = "0644"
2026-02-19 02:27:51.677093 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-02-19 02:27:51.677097 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677100 | orchestrator |     }
2026-02-19 02:27:51.677107 | orchestrator |
2026-02-19 02:27:51.677111 | orchestrator |   # local_file.id_rsa_pub will be created
2026-02-19 02:27:51.677115 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-02-19 02:27:51.677118 | orchestrator |       + content = (known after apply)
2026-02-19 02:27:51.677122 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-19 02:27:51.677126 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-19 02:27:51.677130 | orchestrator |       + content_md5 = (known after apply)
2026-02-19 02:27:51.677133 | orchestrator |       + content_sha1 = (known after apply)
2026-02-19 02:27:51.677137 | orchestrator |       + content_sha256 = (known after apply)
2026-02-19 02:27:51.677154 | orchestrator |       + content_sha512 = (known after apply)
2026-02-19 02:27:51.677157 | orchestrator |       + directory_permission = "0777"
2026-02-19 02:27:51.677161 | orchestrator |       + file_permission = "0644"
2026-02-19 02:27:51.677169 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-02-19 02:27:51.677173 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677177 | orchestrator |     }
2026-02-19 02:27:51.677181 | orchestrator |
2026-02-19 02:27:51.677185 | orchestrator |   # local_file.inventory will be created
2026-02-19 02:27:51.677188 | orchestrator |   + resource "local_file" "inventory" {
2026-02-19 02:27:51.677192 | orchestrator |       + content = (known after apply)
2026-02-19 02:27:51.677196 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-19 02:27:51.677200 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-19 02:27:51.677203 | orchestrator |       + content_md5 = (known after apply)
2026-02-19 02:27:51.677207 | orchestrator |       + content_sha1 = (known after apply)
2026-02-19 02:27:51.677211 | orchestrator |       + content_sha256 = (known after apply)
2026-02-19 02:27:51.677215 | orchestrator |       + content_sha512 = (known after apply)
2026-02-19 02:27:51.677219 | orchestrator |       + directory_permission = "0777"
2026-02-19 02:27:51.677223 | orchestrator |       + file_permission = "0644"
2026-02-19 02:27:51.677226 | orchestrator |       + filename = "inventory.ci"
2026-02-19 02:27:51.677230 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677234 | orchestrator |     }
2026-02-19 02:27:51.677238 | orchestrator |
2026-02-19 02:27:51.677241 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-02-19 02:27:51.677245 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-02-19 02:27:51.677249 | orchestrator |       + content = (sensitive value)
2026-02-19 02:27:51.677253 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-19 02:27:51.677257 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-19 02:27:51.677260 | orchestrator |       + content_md5 = (known after apply)
2026-02-19 02:27:51.677264 | orchestrator |       + content_sha1 = (known after apply)
2026-02-19 02:27:51.677268 | orchestrator |       + content_sha256 = (known after apply)
2026-02-19 02:27:51.677272 | orchestrator |       + content_sha512 = (known after apply)
2026-02-19 02:27:51.677275 | orchestrator |       + directory_permission = "0700"
2026-02-19 02:27:51.677279 | orchestrator |       + file_permission = "0600"
2026-02-19 02:27:51.677283 | orchestrator |       + filename = ".id_rsa.ci"
2026-02-19 02:27:51.677287 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677291 | orchestrator |     }
2026-02-19 02:27:51.677294 | orchestrator |
2026-02-19 02:27:51.677298 | orchestrator |   # null_resource.node_semaphore will be created
2026-02-19 02:27:51.677302 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-02-19 02:27:51.677306 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677310 | orchestrator |     }
2026-02-19 02:27:51.677316 | orchestrator |
2026-02-19 02:27:51.677320 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-19 02:27:51.677339 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-19 02:27:51.677345 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677351 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677355 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677359 | orchestrator |       + image_id = (known after apply)
2026-02-19 02:27:51.677362 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677367 | orchestrator |       + name = "testbed-volume-manager-base"
2026-02-19 02:27:51.677370 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677374 | orchestrator |       + size = 80
2026-02-19 02:27:51.677378 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.677381 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.677385 | orchestrator |     }
2026-02-19 02:27:51.677389 | orchestrator |
2026-02-19 02:27:51.677393 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-19 02:27:51.677396 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-19 02:27:51.677400 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677404 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677407 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677415 | orchestrator |       + image_id = (known after apply)
2026-02-19 02:27:51.677419 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677423 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-02-19 02:27:51.677426 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677430 | orchestrator |       + size = 80
2026-02-19 02:27:51.677434 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.677438 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.677441 | orchestrator |     }
2026-02-19 02:27:51.677445 | orchestrator |
2026-02-19 02:27:51.677449 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-19 02:27:51.677452 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-19 02:27:51.677456 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677460 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677464 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677467 | orchestrator |       + image_id = (known after apply)
2026-02-19 02:27:51.677471 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677475 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-02-19 02:27:51.677479 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677482 | orchestrator |       + size = 80
2026-02-19 02:27:51.677486 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.677490 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.677493 | orchestrator |     }
2026-02-19 02:27:51.677497 | orchestrator |
2026-02-19 02:27:51.677501 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-19 02:27:51.677505 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-19 02:27:51.677508 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677512 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677516 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677519 | orchestrator |       + image_id = (known after apply)
2026-02-19 02:27:51.677523 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677527 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-02-19 02:27:51.677531 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677534 | orchestrator |       + size = 80
2026-02-19 02:27:51.677541 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.677545 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.677549 | orchestrator |     }
2026-02-19 02:27:51.677556 | orchestrator |
2026-02-19 02:27:51.677560 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-19 02:27:51.677563 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-19 02:27:51.677567 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677571 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677575 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677578 | orchestrator |       + image_id = (known after apply)
2026-02-19 02:27:51.677582 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677586 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-02-19 02:27:51.677589 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677593 | orchestrator |       + size = 80
2026-02-19 02:27:51.677597 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.677601 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.677604 | orchestrator |     }
2026-02-19 02:27:51.677608 | orchestrator |
2026-02-19 02:27:51.677612 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-19 02:27:51.677616 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-19 02:27:51.677619 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677623 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677627 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677634 | orchestrator |       + image_id = (known after apply)
2026-02-19 02:27:51.677638 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677642 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-02-19 02:27:51.677646 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677649 | orchestrator |       + size = 80
2026-02-19 02:27:51.677653 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.677657 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.677661 | orchestrator |     }
2026-02-19 02:27:51.677664 | orchestrator |
2026-02-19 02:27:51.677668 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-19 02:27:51.677672 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-19 02:27:51.677676 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677679 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677683 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677687 | orchestrator |       + image_id = (known after apply)
2026-02-19 02:27:51.677691 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677694 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-02-19 02:27:51.677698 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677702 | orchestrator |       + size = 80
2026-02-19 02:27:51.677705 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.677709 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.677713 | orchestrator |     }
2026-02-19 02:27:51.677717 | orchestrator |
2026-02-19 02:27:51.677720 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-19 02:27:51.677724 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-19 02:27:51.677728 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677732 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677736 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677739 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677743 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-02-19 02:27:51.677747 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677751 | orchestrator |       + size = 20
2026-02-19 02:27:51.677754 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.677758 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.677762 | orchestrator |     }
2026-02-19 02:27:51.677768 | orchestrator |
2026-02-19 02:27:51.677772 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-19 02:27:51.677776 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-19 02:27:51.677780 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677783 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677787 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677791 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677795 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-02-19 02:27:51.677798 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677802 | orchestrator |       + size = 20
2026-02-19 02:27:51.677806 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.677810 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.677813 | orchestrator |     }
2026-02-19 02:27:51.677817 | orchestrator |
2026-02-19 02:27:51.677821 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-19 02:27:51.677824 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-19 02:27:51.677828 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677832 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677836 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677839 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677843 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-02-19 02:27:51.677847 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677854 | orchestrator |       + size = 20
2026-02-19 02:27:51.677858 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.677862 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.677865 | orchestrator |     }
2026-02-19 02:27:51.677869 | orchestrator |
2026-02-19 02:27:51.677873 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-19 02:27:51.677877 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-19 02:27:51.677880 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677884 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677888 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677895 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677899 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-02-19 02:27:51.677903 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677906 | orchestrator |       + size = 20
2026-02-19 02:27:51.677910 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.677914 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.677917 | orchestrator |     }
2026-02-19 02:27:51.677921 | orchestrator |
2026-02-19 02:27:51.677925 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-19 02:27:51.677929 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-19 02:27:51.677933 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677936 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677940 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677944 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677947 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-02-19 02:27:51.677951 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.677955 | orchestrator |       + size = 20
2026-02-19 02:27:51.677959 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.677962 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.677966 | orchestrator |     }
2026-02-19 02:27:51.677970 | orchestrator |
2026-02-19 02:27:51.677973 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-19 02:27:51.677977 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-19 02:27:51.677981 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.677985 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.677988 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.677992 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.677996 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-02-19 02:27:51.678000 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.678003 | orchestrator |       + size = 20
2026-02-19 02:27:51.678007 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.678011 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.678037 | orchestrator |     }
2026-02-19 02:27:51.678044 | orchestrator |
2026-02-19 02:27:51.678048 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-19 02:27:51.678052 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-19 02:27:51.678056 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.678060 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.678063 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.678067 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.678071 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-02-19 02:27:51.678075 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.678078 | orchestrator |       + size = 20
2026-02-19 02:27:51.678082 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.678087 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.678090 | orchestrator |     }
2026-02-19 02:27:51.678094 | orchestrator |
2026-02-19 02:27:51.678098 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-19 02:27:51.678102 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-19 02:27:51.678109 | orchestrator |       + attachment = (known after apply)
2026-02-19 02:27:51.678113 | orchestrator |       + availability_zone = "nova"
2026-02-19 02:27:51.678117 | orchestrator |       + id = (known after apply)
2026-02-19 02:27:51.678121 | orchestrator |       + metadata = (known after apply)
2026-02-19 02:27:51.678124 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-02-19 02:27:51.678128 | orchestrator |       + region = (known after apply)
2026-02-19 02:27:51.678132 | orchestrator |       + size = 20
2026-02-19 02:27:51.678136 | orchestrator |       + volume_retype_policy = "never"
2026-02-19 02:27:51.678139 | orchestrator |       + volume_type = "ssd"
2026-02-19 02:27:51.678143 | orchestrator |     }
2026-02-19 02:27:51.678147 | orchestrator |
2026-02-19 02:27:51.678151 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-19 02:27:51.678155 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-19 02:27:51.678158 | orchestrator | + attachment = (known after apply) 2026-02-19 02:27:51.678162 | orchestrator | + availability_zone = "nova" 2026-02-19 02:27:51.678166 | orchestrator | + id = (known after apply) 2026-02-19 02:27:51.678169 | orchestrator | + metadata = (known after apply) 2026-02-19 02:27:51.678173 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-19 02:27:51.678177 | orchestrator | + region = (known after apply) 2026-02-19 02:27:51.678181 | orchestrator | + size = 20 2026-02-19 02:27:51.678184 | orchestrator | + volume_retype_policy = "never" 2026-02-19 02:27:51.678188 | orchestrator | + volume_type = "ssd" 2026-02-19 02:27:51.678192 | orchestrator | } 2026-02-19 02:27:51.678198 | orchestrator | 2026-02-19 02:27:51.678202 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-19 02:27:51.678206 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-19 02:27:51.678209 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-19 02:27:51.678213 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-19 02:27:51.678217 | orchestrator | + all_metadata = (known after apply) 2026-02-19 02:27:51.678220 | orchestrator | + all_tags = (known after apply) 2026-02-19 02:27:51.678224 | orchestrator | + availability_zone = "nova" 2026-02-19 02:27:51.678228 | orchestrator | + config_drive = true 2026-02-19 02:27:51.678235 | orchestrator | + created = (known after apply) 2026-02-19 02:27:51.678239 | orchestrator | + flavor_id = (known after apply) 2026-02-19 02:27:51.678242 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-19 02:27:51.678246 | orchestrator | + force_delete = false 2026-02-19 02:27:51.678252 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-19 02:27:51.678258 | 
orchestrator | + id = (known after apply) 2026-02-19 02:27:51.678264 | orchestrator | + image_id = (known after apply) 2026-02-19 02:27:51.678270 | orchestrator | + image_name = (known after apply) 2026-02-19 02:27:51.678275 | orchestrator | + key_pair = "testbed" 2026-02-19 02:27:51.678281 | orchestrator | + name = "testbed-manager" 2026-02-19 02:27:51.678287 | orchestrator | + power_state = "active" 2026-02-19 02:27:51.678293 | orchestrator | + region = (known after apply) 2026-02-19 02:27:51.678298 | orchestrator | + security_groups = (known after apply) 2026-02-19 02:27:51.678304 | orchestrator | + stop_before_destroy = false 2026-02-19 02:27:51.678309 | orchestrator | + updated = (known after apply) 2026-02-19 02:27:51.678315 | orchestrator | + user_data = (sensitive value) 2026-02-19 02:27:51.678336 | orchestrator | 2026-02-19 02:27:51.678342 | orchestrator | + block_device { 2026-02-19 02:27:51.678345 | orchestrator | + boot_index = 0 2026-02-19 02:27:51.678349 | orchestrator | + delete_on_termination = false 2026-02-19 02:27:51.678353 | orchestrator | + destination_type = "volume" 2026-02-19 02:27:51.678357 | orchestrator | + multiattach = false 2026-02-19 02:27:51.678360 | orchestrator | + source_type = "volume" 2026-02-19 02:27:51.678364 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.678374 | orchestrator | } 2026-02-19 02:27:51.678378 | orchestrator | 2026-02-19 02:27:51.678382 | orchestrator | + network { 2026-02-19 02:27:51.678385 | orchestrator | + access_network = false 2026-02-19 02:27:51.678389 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-19 02:27:51.678393 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-19 02:27:51.678396 | orchestrator | + mac = (known after apply) 2026-02-19 02:27:51.678400 | orchestrator | + name = (known after apply) 2026-02-19 02:27:51.678404 | orchestrator | + port = (known after apply) 2026-02-19 02:27:51.678408 | orchestrator | + uuid = (known after apply) 2026-02-19 
02:27:51.678411 | orchestrator | } 2026-02-19 02:27:51.678415 | orchestrator | } 2026-02-19 02:27:51.678422 | orchestrator | 2026-02-19 02:27:51.678426 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-19 02:27:51.678430 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-19 02:27:51.678434 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-19 02:27:51.678437 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-19 02:27:51.678441 | orchestrator | + all_metadata = (known after apply) 2026-02-19 02:27:51.678445 | orchestrator | + all_tags = (known after apply) 2026-02-19 02:27:51.678448 | orchestrator | + availability_zone = "nova" 2026-02-19 02:27:51.678453 | orchestrator | + config_drive = true 2026-02-19 02:27:51.678459 | orchestrator | + created = (known after apply) 2026-02-19 02:27:51.678464 | orchestrator | + flavor_id = (known after apply) 2026-02-19 02:27:51.678474 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-19 02:27:51.678481 | orchestrator | + force_delete = false 2026-02-19 02:27:51.678488 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-19 02:27:51.678494 | orchestrator | + id = (known after apply) 2026-02-19 02:27:51.678500 | orchestrator | + image_id = (known after apply) 2026-02-19 02:27:51.678506 | orchestrator | + image_name = (known after apply) 2026-02-19 02:27:51.678511 | orchestrator | + key_pair = "testbed" 2026-02-19 02:27:51.678517 | orchestrator | + name = "testbed-node-0" 2026-02-19 02:27:51.678523 | orchestrator | + power_state = "active" 2026-02-19 02:27:51.678528 | orchestrator | + region = (known after apply) 2026-02-19 02:27:51.678533 | orchestrator | + security_groups = (known after apply) 2026-02-19 02:27:51.678539 | orchestrator | + stop_before_destroy = false 2026-02-19 02:27:51.678545 | orchestrator | + updated = (known after apply) 2026-02-19 02:27:51.678550 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-19 02:27:51.678556 | orchestrator | 2026-02-19 02:27:51.678563 | orchestrator | + block_device { 2026-02-19 02:27:51.678569 | orchestrator | + boot_index = 0 2026-02-19 02:27:51.678574 | orchestrator | + delete_on_termination = false 2026-02-19 02:27:51.678581 | orchestrator | + destination_type = "volume" 2026-02-19 02:27:51.678587 | orchestrator | + multiattach = false 2026-02-19 02:27:51.678593 | orchestrator | + source_type = "volume" 2026-02-19 02:27:51.678598 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.678604 | orchestrator | } 2026-02-19 02:27:51.678610 | orchestrator | 2026-02-19 02:27:51.678616 | orchestrator | + network { 2026-02-19 02:27:51.678621 | orchestrator | + access_network = false 2026-02-19 02:27:51.678627 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-19 02:27:51.678632 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-19 02:27:51.678637 | orchestrator | + mac = (known after apply) 2026-02-19 02:27:51.678644 | orchestrator | + name = (known after apply) 2026-02-19 02:27:51.678650 | orchestrator | + port = (known after apply) 2026-02-19 02:27:51.678656 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.678662 | orchestrator | } 2026-02-19 02:27:51.678667 | orchestrator | } 2026-02-19 02:27:51.678678 | orchestrator | 2026-02-19 02:27:51.678684 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-19 02:27:51.678690 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-19 02:27:51.678696 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-19 02:27:51.678710 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-19 02:27:51.678717 | orchestrator | + all_metadata = (known after apply) 2026-02-19 02:27:51.678720 | orchestrator | + all_tags = (known after apply) 2026-02-19 02:27:51.678724 | orchestrator | + availability_zone = "nova" 2026-02-19 02:27:51.678728 
| orchestrator | + config_drive = true 2026-02-19 02:27:51.678731 | orchestrator | + created = (known after apply) 2026-02-19 02:27:51.678735 | orchestrator | + flavor_id = (known after apply) 2026-02-19 02:27:51.678739 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-19 02:27:51.678742 | orchestrator | + force_delete = false 2026-02-19 02:27:51.678746 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-19 02:27:51.678750 | orchestrator | + id = (known after apply) 2026-02-19 02:27:51.678753 | orchestrator | + image_id = (known after apply) 2026-02-19 02:27:51.678757 | orchestrator | + image_name = (known after apply) 2026-02-19 02:27:51.678761 | orchestrator | + key_pair = "testbed" 2026-02-19 02:27:51.678764 | orchestrator | + name = "testbed-node-1" 2026-02-19 02:27:51.678768 | orchestrator | + power_state = "active" 2026-02-19 02:27:51.678772 | orchestrator | + region = (known after apply) 2026-02-19 02:27:51.678775 | orchestrator | + security_groups = (known after apply) 2026-02-19 02:27:51.678779 | orchestrator | + stop_before_destroy = false 2026-02-19 02:27:51.678783 | orchestrator | + updated = (known after apply) 2026-02-19 02:27:51.678791 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-19 02:27:51.678795 | orchestrator | 2026-02-19 02:27:51.678799 | orchestrator | + block_device { 2026-02-19 02:27:51.678803 | orchestrator | + boot_index = 0 2026-02-19 02:27:51.678806 | orchestrator | + delete_on_termination = false 2026-02-19 02:27:51.678810 | orchestrator | + destination_type = "volume" 2026-02-19 02:27:51.678814 | orchestrator | + multiattach = false 2026-02-19 02:27:51.678817 | orchestrator | + source_type = "volume" 2026-02-19 02:27:51.678821 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.678825 | orchestrator | } 2026-02-19 02:27:51.678828 | orchestrator | 2026-02-19 02:27:51.678832 | orchestrator | + network { 2026-02-19 02:27:51.678836 | orchestrator | + access_network = 
false 2026-02-19 02:27:51.678839 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-19 02:27:51.678843 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-19 02:27:51.678847 | orchestrator | + mac = (known after apply) 2026-02-19 02:27:51.678851 | orchestrator | + name = (known after apply) 2026-02-19 02:27:51.678854 | orchestrator | + port = (known after apply) 2026-02-19 02:27:51.678858 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.678862 | orchestrator | } 2026-02-19 02:27:51.678865 | orchestrator | } 2026-02-19 02:27:51.678871 | orchestrator | 2026-02-19 02:27:51.678875 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-19 02:27:51.678879 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-19 02:27:51.678883 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-19 02:27:51.678886 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-19 02:27:51.678891 | orchestrator | + all_metadata = (known after apply) 2026-02-19 02:27:51.678894 | orchestrator | + all_tags = (known after apply) 2026-02-19 02:27:51.678898 | orchestrator | + availability_zone = "nova" 2026-02-19 02:27:51.678902 | orchestrator | + config_drive = true 2026-02-19 02:27:51.678905 | orchestrator | + created = (known after apply) 2026-02-19 02:27:51.678909 | orchestrator | + flavor_id = (known after apply) 2026-02-19 02:27:51.678913 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-19 02:27:51.678916 | orchestrator | + force_delete = false 2026-02-19 02:27:51.678920 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-19 02:27:51.678924 | orchestrator | + id = (known after apply) 2026-02-19 02:27:51.678927 | orchestrator | + image_id = (known after apply) 2026-02-19 02:27:51.678936 | orchestrator | + image_name = (known after apply) 2026-02-19 02:27:51.678940 | orchestrator | + key_pair = "testbed" 2026-02-19 02:27:51.678944 | orchestrator | + name = 
"testbed-node-2" 2026-02-19 02:27:51.678947 | orchestrator | + power_state = "active" 2026-02-19 02:27:51.678951 | orchestrator | + region = (known after apply) 2026-02-19 02:27:51.678955 | orchestrator | + security_groups = (known after apply) 2026-02-19 02:27:51.678958 | orchestrator | + stop_before_destroy = false 2026-02-19 02:27:51.678962 | orchestrator | + updated = (known after apply) 2026-02-19 02:27:51.678965 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-19 02:27:51.678969 | orchestrator | 2026-02-19 02:27:51.678973 | orchestrator | + block_device { 2026-02-19 02:27:51.678977 | orchestrator | + boot_index = 0 2026-02-19 02:27:51.678980 | orchestrator | + delete_on_termination = false 2026-02-19 02:27:51.678984 | orchestrator | + destination_type = "volume" 2026-02-19 02:27:51.678987 | orchestrator | + multiattach = false 2026-02-19 02:27:51.678991 | orchestrator | + source_type = "volume" 2026-02-19 02:27:51.678995 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.678998 | orchestrator | } 2026-02-19 02:27:51.679002 | orchestrator | 2026-02-19 02:27:51.679006 | orchestrator | + network { 2026-02-19 02:27:51.679009 | orchestrator | + access_network = false 2026-02-19 02:27:51.679013 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-19 02:27:51.679017 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-19 02:27:51.679020 | orchestrator | + mac = (known after apply) 2026-02-19 02:27:51.679024 | orchestrator | + name = (known after apply) 2026-02-19 02:27:51.679028 | orchestrator | + port = (known after apply) 2026-02-19 02:27:51.679031 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.679035 | orchestrator | } 2026-02-19 02:27:51.679039 | orchestrator | } 2026-02-19 02:27:51.679044 | orchestrator | 2026-02-19 02:27:51.679057 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-19 02:27:51.679060 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-19 02:27:51.679064 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-19 02:27:51.679068 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-19 02:27:51.679071 | orchestrator | + all_metadata = (known after apply) 2026-02-19 02:27:51.679075 | orchestrator | + all_tags = (known after apply) 2026-02-19 02:27:51.679079 | orchestrator | + availability_zone = "nova" 2026-02-19 02:27:51.679082 | orchestrator | + config_drive = true 2026-02-19 02:27:51.679086 | orchestrator | + created = (known after apply) 2026-02-19 02:27:51.679090 | orchestrator | + flavor_id = (known after apply) 2026-02-19 02:27:51.679093 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-19 02:27:51.679097 | orchestrator | + force_delete = false 2026-02-19 02:27:51.679100 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-19 02:27:51.679104 | orchestrator | + id = (known after apply) 2026-02-19 02:27:51.679108 | orchestrator | + image_id = (known after apply) 2026-02-19 02:27:51.679111 | orchestrator | + image_name = (known after apply) 2026-02-19 02:27:51.679115 | orchestrator | + key_pair = "testbed" 2026-02-19 02:27:51.679119 | orchestrator | + name = "testbed-node-3" 2026-02-19 02:27:51.679122 | orchestrator | + power_state = "active" 2026-02-19 02:27:51.679126 | orchestrator | + region = (known after apply) 2026-02-19 02:27:51.679129 | orchestrator | + security_groups = (known after apply) 2026-02-19 02:27:51.679133 | orchestrator | + stop_before_destroy = false 2026-02-19 02:27:51.679137 | orchestrator | + updated = (known after apply) 2026-02-19 02:27:51.679140 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-19 02:27:51.679144 | orchestrator | 2026-02-19 02:27:51.679148 | orchestrator | + block_device { 2026-02-19 02:27:51.679151 | orchestrator | + boot_index = 0 2026-02-19 02:27:51.679155 | orchestrator | + delete_on_termination = false 2026-02-19 
02:27:51.679159 | orchestrator | + destination_type = "volume" 2026-02-19 02:27:51.679167 | orchestrator | + multiattach = false 2026-02-19 02:27:51.679170 | orchestrator | + source_type = "volume" 2026-02-19 02:27:51.679174 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.679178 | orchestrator | } 2026-02-19 02:27:51.679181 | orchestrator | 2026-02-19 02:27:51.679185 | orchestrator | + network { 2026-02-19 02:27:51.679189 | orchestrator | + access_network = false 2026-02-19 02:27:51.679192 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-19 02:27:51.679196 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-19 02:27:51.679200 | orchestrator | + mac = (known after apply) 2026-02-19 02:27:51.679203 | orchestrator | + name = (known after apply) 2026-02-19 02:27:51.679207 | orchestrator | + port = (known after apply) 2026-02-19 02:27:51.679211 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.679214 | orchestrator | } 2026-02-19 02:27:51.679218 | orchestrator | } 2026-02-19 02:27:51.679224 | orchestrator | 2026-02-19 02:27:51.679228 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-19 02:27:51.679232 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-19 02:27:51.679235 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-19 02:27:51.679239 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-19 02:27:51.679243 | orchestrator | + all_metadata = (known after apply) 2026-02-19 02:27:51.679246 | orchestrator | + all_tags = (known after apply) 2026-02-19 02:27:51.679250 | orchestrator | + availability_zone = "nova" 2026-02-19 02:27:51.679254 | orchestrator | + config_drive = true 2026-02-19 02:27:51.679257 | orchestrator | + created = (known after apply) 2026-02-19 02:27:51.679261 | orchestrator | + flavor_id = (known after apply) 2026-02-19 02:27:51.679264 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-19 02:27:51.679268 | 
orchestrator | + force_delete = false 2026-02-19 02:27:51.679272 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-19 02:27:51.679275 | orchestrator | + id = (known after apply) 2026-02-19 02:27:51.679279 | orchestrator | + image_id = (known after apply) 2026-02-19 02:27:51.679283 | orchestrator | + image_name = (known after apply) 2026-02-19 02:27:51.679286 | orchestrator | + key_pair = "testbed" 2026-02-19 02:27:51.679290 | orchestrator | + name = "testbed-node-4" 2026-02-19 02:27:51.679293 | orchestrator | + power_state = "active" 2026-02-19 02:27:51.679297 | orchestrator | + region = (known after apply) 2026-02-19 02:27:51.679301 | orchestrator | + security_groups = (known after apply) 2026-02-19 02:27:51.679304 | orchestrator | + stop_before_destroy = false 2026-02-19 02:27:51.679308 | orchestrator | + updated = (known after apply) 2026-02-19 02:27:51.679312 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-19 02:27:51.679315 | orchestrator | 2026-02-19 02:27:51.679319 | orchestrator | + block_device { 2026-02-19 02:27:51.679356 | orchestrator | + boot_index = 0 2026-02-19 02:27:51.679360 | orchestrator | + delete_on_termination = false 2026-02-19 02:27:51.679364 | orchestrator | + destination_type = "volume" 2026-02-19 02:27:51.679368 | orchestrator | + multiattach = false 2026-02-19 02:27:51.679371 | orchestrator | + source_type = "volume" 2026-02-19 02:27:51.679375 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.679379 | orchestrator | } 2026-02-19 02:27:51.679382 | orchestrator | 2026-02-19 02:27:51.679386 | orchestrator | + network { 2026-02-19 02:27:51.679390 | orchestrator | + access_network = false 2026-02-19 02:27:51.679394 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-19 02:27:51.679397 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-19 02:27:51.679401 | orchestrator | + mac = (known after apply) 2026-02-19 02:27:51.679405 | orchestrator | + name = (known 
after apply) 2026-02-19 02:27:51.679408 | orchestrator | + port = (known after apply) 2026-02-19 02:27:51.679412 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.679416 | orchestrator | } 2026-02-19 02:27:51.679420 | orchestrator | } 2026-02-19 02:27:51.679430 | orchestrator | 2026-02-19 02:27:51.679434 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-19 02:27:51.679438 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-19 02:27:51.679442 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-19 02:27:51.679445 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-19 02:27:51.679449 | orchestrator | + all_metadata = (known after apply) 2026-02-19 02:27:51.679453 | orchestrator | + all_tags = (known after apply) 2026-02-19 02:27:51.679456 | orchestrator | + availability_zone = "nova" 2026-02-19 02:27:51.679460 | orchestrator | + config_drive = true 2026-02-19 02:27:51.679464 | orchestrator | + created = (known after apply) 2026-02-19 02:27:51.679468 | orchestrator | + flavor_id = (known after apply) 2026-02-19 02:27:51.679471 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-19 02:27:51.679475 | orchestrator | + force_delete = false 2026-02-19 02:27:51.679479 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-19 02:27:51.679482 | orchestrator | + id = (known after apply) 2026-02-19 02:27:51.679486 | orchestrator | + image_id = (known after apply) 2026-02-19 02:27:51.679490 | orchestrator | + image_name = (known after apply) 2026-02-19 02:27:51.679493 | orchestrator | + key_pair = "testbed" 2026-02-19 02:27:51.679497 | orchestrator | + name = "testbed-node-5" 2026-02-19 02:27:51.679501 | orchestrator | + power_state = "active" 2026-02-19 02:27:51.679504 | orchestrator | + region = (known after apply) 2026-02-19 02:27:51.679508 | orchestrator | + security_groups = (known after apply) 2026-02-19 02:27:51.679512 | orchestrator | + 
stop_before_destroy = false 2026-02-19 02:27:51.679515 | orchestrator | + updated = (known after apply) 2026-02-19 02:27:51.679519 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-19 02:27:51.679523 | orchestrator | 2026-02-19 02:27:51.679527 | orchestrator | + block_device { 2026-02-19 02:27:51.679530 | orchestrator | + boot_index = 0 2026-02-19 02:27:51.679534 | orchestrator | + delete_on_termination = false 2026-02-19 02:27:51.679538 | orchestrator | + destination_type = "volume" 2026-02-19 02:27:51.679541 | orchestrator | + multiattach = false 2026-02-19 02:27:51.679545 | orchestrator | + source_type = "volume" 2026-02-19 02:27:51.679549 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.679552 | orchestrator | } 2026-02-19 02:27:51.679556 | orchestrator | 2026-02-19 02:27:51.679560 | orchestrator | + network { 2026-02-19 02:27:51.679563 | orchestrator | + access_network = false 2026-02-19 02:27:51.679567 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-19 02:27:51.679571 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-19 02:27:51.679575 | orchestrator | + mac = (known after apply) 2026-02-19 02:27:51.679578 | orchestrator | + name = (known after apply) 2026-02-19 02:27:51.679582 | orchestrator | + port = (known after apply) 2026-02-19 02:27:51.679586 | orchestrator | + uuid = (known after apply) 2026-02-19 02:27:51.679589 | orchestrator | } 2026-02-19 02:27:51.679593 | orchestrator | } 2026-02-19 02:27:51.679597 | orchestrator | 2026-02-19 02:27:51.679601 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-19 02:27:51.679605 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-19 02:27:51.679608 | orchestrator | + fingerprint = (known after apply) 2026-02-19 02:27:51.679612 | orchestrator | + id = (known after apply) 2026-02-19 02:27:51.679616 | orchestrator | + name = "testbed" 2026-02-19 02:27:51.679619 | orchestrator | + private_key = 
(sensitive value) 2026-02-19 02:27:51.679623 | orchestrator | + public_key = (known after apply) 2026-02-19 02:27:51.679627 | orchestrator | + region = (known after apply) 2026-02-19 02:27:51.679630 | orchestrator | + user_id = (known after apply) 2026-02-19 02:27:51.679634 | orchestrator | } 2026-02-19 02:27:51.679638 | orchestrator | 2026-02-19 02:27:51.679642 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-19 02:27:51.679646 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-19 02:27:51.679655 | orchestrator | + device = (known after apply) 2026-02-19 02:27:51.679662 | orchestrator | + id = (known after apply) 2026-02-19 02:27:51.679668 | orchestrator | + instance_id = (known after apply) 2026-02-19 02:27:51.679677 | orchestrator | + region = (known after apply) 2026-02-19 02:27:51.679690 | orchestrator | + volume_id = (known after apply) 2026-02-19 02:27:51.679696 | orchestrator | } 2026-02-19 02:27:51.679703 | orchestrator | 2026-02-19 02:27:51.679709 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-19 02:27:51.679715 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-19 02:27:51.679721 | orchestrator | + device = (known after apply) 2026-02-19 02:27:51.679728 | orchestrator | + id = (known after apply) 2026-02-19 02:27:51.679734 | orchestrator | + instance_id = (known after apply) 2026-02-19 02:27:51.679741 | orchestrator | + region = (known after apply) 2026-02-19 02:27:51.679747 | orchestrator | + volume_id = (known after apply) 2026-02-19 02:27:51.679753 | orchestrator | } 2026-02-19 02:27:51.679764 | orchestrator | 2026-02-19 02:27:51.679770 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-19 02:27:51.679777 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-02-19 02:27:51.682125 | orchestrator | + network_id = (known after apply) 2026-02-19 02:27:51.682129 | orchestrator | + no_gateway = false 2026-02-19 02:27:51.682132 | orchestrator | + region = (known after apply) 2026-02-19 02:27:51.682136 | orchestrator | + service_types = (known after apply) 2026-02-19 02:27:51.682144 | orchestrator | + tenant_id = (known after apply) 2026-02-19 02:27:51.682147 | orchestrator | 2026-02-19 02:27:51.682151 | orchestrator | + allocation_pool { 2026-02-19 02:27:51.682155 | orchestrator | + end = "192.168.31.250" 2026-02-19 02:27:51.682159 | orchestrator | + start = "192.168.31.200" 2026-02-19 02:27:51.682162 | orchestrator | } 2026-02-19 02:27:51.682166 | orchestrator | } 2026-02-19 02:27:51.682170 | orchestrator | 2026-02-19 02:27:51.682174 | orchestrator | # terraform_data.image will be created 2026-02-19 02:27:51.682177 | orchestrator | + resource "terraform_data" "image" { 2026-02-19 02:27:51.682181 | orchestrator | + id = (known after apply) 2026-02-19 02:27:51.682185 | orchestrator | + input = "Ubuntu 24.04" 2026-02-19 02:27:51.682189 | orchestrator | + output = (known after apply) 2026-02-19 02:27:51.682192 | orchestrator | } 2026-02-19 02:27:51.682196 | orchestrator | 2026-02-19 02:27:51.682200 | orchestrator | # terraform_data.image_node will be created 2026-02-19 02:27:51.682203 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-19 02:27:51.682207 | orchestrator | + id = (known after apply) 2026-02-19 02:27:51.682211 | orchestrator | + input = "Ubuntu 24.04" 2026-02-19 02:27:51.682214 | orchestrator | + output = (known after apply) 2026-02-19 02:27:51.682218 | orchestrator | } 2026-02-19 02:27:51.682222 | orchestrator | 2026-02-19 02:27:51.682226 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
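The plan output above implies Terraform definitions along these lines. This is a minimal sketch using the resource names and values shown in the plan; the argument layout and the security group referenced by the VRRP rule are reconstructions, not the actual osism/testbed source:

```hcl
# Hypothetical reconstruction from the plan output above; the security group
# reference in the VRRP rule is an assumption, not taken from the log.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP is IP protocol number 112, so no port range
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]
  enable_dhcp     = true

  # DHCP only hands out addresses from this pool; the rest of the /20
  # stays free for statically addressed testbed nodes.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note that VRRP has no ports, which is why the rule matches on the IP protocol number directly rather than a TCP/UDP port range.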
2026-02-19 02:27:51.682229 | orchestrator |
2026-02-19 02:27:51.682233 | orchestrator | Changes to Outputs:
2026-02-19 02:27:51.682237 | orchestrator |   + manager_address = (sensitive value)
2026-02-19 02:27:51.682240 | orchestrator |   + private_key = (sensitive value)
2026-02-19 02:27:51.923181 | orchestrator | terraform_data.image: Creating...
2026-02-19 02:27:51.923302 | orchestrator | terraform_data.image: Creation complete after 0s [id=3122911e-49b5-1cd5-9ba2-c5178e78fd27]
2026-02-19 02:27:51.923402 | orchestrator | terraform_data.image_node: Creating...
2026-02-19 02:27:51.924098 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=071f0f46-d5d3-95e9-2aed-0bd00828c10e]
2026-02-19 02:27:51.944451 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-19 02:27:51.949305 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-19 02:27:51.949391 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-19 02:27:51.957568 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-19 02:27:51.960636 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-19 02:27:51.960724 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-19 02:27:51.961257 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-19 02:27:51.965016 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-19 02:27:51.966096 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-19 02:27:51.966273 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-19 02:27:52.441567 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-19 02:27:52.447115 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-02-19 02:27:52.463661 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-19 02:27:52.465065 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-19 02:27:52.740737 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-19 02:27:52.749442 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-19 02:27:52.867472 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=8f465e69-bfe4-4114-a24b-340413601d71]
2026-02-19 02:27:52.873161 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-19 02:27:55.594144 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=74afed04-a71e-4a02-a193-e459fbff666b]
2026-02-19 02:27:55.607206 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=c337844b-d29f-48f9-b97b-1b04477f979e]
2026-02-19 02:27:55.607320 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=170e0235-dc73-4e1c-89b5-c2562fe21aa0]
2026-02-19 02:27:55.613965 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-19 02:27:55.614376 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-19 02:27:55.619713 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-19 02:27:55.621939 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=06128b56-8ab2-4257-b6d0-e15d23330262]
2026-02-19 02:27:55.626092 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=e6e1d1de8c1caf593709ea7c43cbf776a60c8843]
2026-02-19 02:27:55.626655 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-19 02:27:55.628455 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=3e7ac312bd3b3f41e1981455266729ef25435a57]
2026-02-19 02:27:55.628513 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=4779b863-88a8-4699-869f-263c4bc04c46]
2026-02-19 02:27:55.633938 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-19 02:27:55.634647 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-19 02:27:55.640704 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-19 02:27:55.642246 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=85ad02dc-7182-4f7f-aeb0-a64abf6b1c58]
2026-02-19 02:27:55.647647 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-19 02:27:55.701699 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=50533a39-fac2-4c6c-8c30-88a176048417]
2026-02-19 02:27:55.716994 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-19 02:27:55.723878 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=c1412cfc-917e-4010-87bd-d14c29c1eff8]
2026-02-19 02:27:55.956243 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=eb0041fe-9a39-4a97-a19c-5bfadd191a42]
2026-02-19 02:27:56.204147 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=b283ac38-22f6-4db4-ae2a-791f04f43aaf]
2026-02-19 02:27:56.666837 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=62afc33e-f877-4eb3-80e7-f8649a86241e]
2026-02-19 02:27:56.674324 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-19 02:27:59.025616 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=b7aa0e34-9a3e-479c-b466-47f6ccb691a2]
2026-02-19 02:27:59.045957 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=23a82e55-09a4-48a2-8455-a56aa9578cd9]
2026-02-19 02:27:59.079791 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4]
2026-02-19 02:27:59.104660 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=28e9d7a7-0f4d-4da3-8222-650c024604ec]
2026-02-19 02:27:59.133919 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=b5b78108-03a3-45a4-88e7-9b1ec0e9e95a]
2026-02-19 02:27:59.236051 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=2d17f80a-f41c-4c05-91d8-d602b7f93b84]
2026-02-19 02:27:59.721275 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=bcba7edf-d1f9-414b-9a4f-1b653123033b]
2026-02-19 02:27:59.729744 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-19 02:27:59.729801 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-19 02:27:59.732388 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-19 02:27:59.952941 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=2190e03a-13fb-4036-ad13-6934543df0fb]
2026-02-19 02:27:59.961163 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=80a07eb8-c2fc-4b44-b53c-94ec75c4ffb1]
2026-02-19 02:27:59.963477 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-19 02:27:59.964071 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-19 02:27:59.965692 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-19 02:27:59.968570 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-19 02:27:59.975115 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-19 02:27:59.976169 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-19 02:27:59.976893 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-19 02:27:59.982172 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-19 02:27:59.984518 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-19 02:28:00.145749 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=6e328ae3-4a9b-4d0d-9147-640cad59e338]
2026-02-19 02:28:00.155427 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-19 02:28:00.306061 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=507c8aac-c2a4-4366-b037-3649dbede1f1]
2026-02-19 02:28:00.316478 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-19 02:28:00.379806 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=23ec6c5b-8506-4a6b-95e2-80c365fcc092]
2026-02-19 02:28:00.391265 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-19 02:28:00.479951 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=607a9a9b-be23-4032-a3ca-7c46560c3714]
2026-02-19 02:28:00.486650 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-19 02:28:00.637558 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=9d3b4695-258d-4005-86a3-16a315a81d60]
2026-02-19 02:28:00.648930 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-19 02:28:00.655901 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=c128cad6-210e-4dd4-8815-c542692d1a56]
2026-02-19 02:28:00.665848 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-19 02:28:00.710922 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=0925b299-5d99-4a32-89d2-070570b8d105]
2026-02-19 02:28:00.717849 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-19 02:28:00.870924 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=8123b7f3-22c3-4bc8-831b-efcb20dc28bc]
2026-02-19 02:28:00.929936 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=8f3836b3-849d-4c60-9b37-d2719366af54]
2026-02-19 02:28:01.214486 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=df019123-e092-41ac-9f71-782d357c3595]
2026-02-19 02:28:01.233726 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=c94ce648-1034-4b19-be6f-d95c149f5abb]
2026-02-19 02:28:01.280975 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=2752e1ec-cc68-4ca1-8208-3aec1c83104c]
2026-02-19 02:28:01.317918 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=b8f098e2-e824-41f2-acc6-a74a8844957e]
2026-02-19 02:28:01.444186 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=51280aed-5ea9-4780-955b-01cc88a4ac29]
2026-02-19 02:28:01.527998 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=e3a29525-b68c-4c03-952d-553fea6d867e]
2026-02-19 02:28:01.683045 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=38a6dd78-62c0-4dc2-8c47-ca6373c2c31f]
2026-02-19 02:28:02.606148 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=fd7b2bce-3f97-4777-91be-dbdcabfc7af7]
2026-02-19 02:28:02.628855 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-19 02:28:02.641825 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-19 02:28:02.642706 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-19 02:28:02.643055 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-19 02:28:02.653180 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-19 02:28:02.653637 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-19 02:28:02.659938 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-19 02:28:04.130734 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=ca3753e6-3828-483f-8230-f452415156e4]
2026-02-19 02:28:04.138235 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-19 02:28:04.254488 | orchestrator | local_file.inventory: Creating...
2026-02-19 02:28:04.254593 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-19 02:28:04.254604 | orchestrator | local_file.inventory: Creation complete after 0s [id=6f219868f8d211084369a73e9285ad77764790f6]
2026-02-19 02:28:04.254613 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ff37b475084760d4e5b7f62427afbcefcde9910b]
2026-02-19 02:28:04.919493 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=ca3753e6-3828-483f-8230-f452415156e4]
2026-02-19 02:28:12.642112 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-19 02:28:12.643303 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-19 02:28:12.643357 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-19 02:28:12.654654 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-19 02:28:12.655846 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-19 02:28:12.665031 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-19 02:28:22.643490 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-19 02:28:22.643566 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-19 02:28:22.643582 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-19 02:28:22.655941 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-19 02:28:22.656080 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-19 02:28:22.665525 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-19 02:28:23.159928 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=7e5f80ee-5cf4-4a5b-89f2-ac3820763181]
2026-02-19 02:28:23.221674 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=43e6f7cb-4f27-40bb-ac2c-5810139f0cca]
2026-02-19 02:28:23.797924 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=c2944694-b77c-44c5-8f37-f389568031ea]
2026-02-19 02:28:32.652136 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-02-19 02:28:32.656334 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-02-19 02:28:32.665698 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-02-19 02:28:33.372712 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=0715b8b3-c047-4fec-9afc-d1a7008c7769]
2026-02-19 02:28:33.690341 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=3bd0ac25-bbae-44b8-8be9-7e79b2681278]
2026-02-19 02:28:33.900552 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=ebade39e-92a2-4d78-bc25-fdd075827f6f]
2026-02-19 02:28:33.924472 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-19 02:28:33.943050 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=6895862602551578690]
2026-02-19 02:28:33.943193 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-19 02:28:33.944255 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-19 02:28:33.948199 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-19 02:28:33.950117 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-19 02:28:33.952830 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-19 02:28:33.954602 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-19 02:28:33.955815 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-19 02:28:33.981209 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-19 02:28:33.985490 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-19 02:28:33.990534 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-19 02:28:37.321919 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=43e6f7cb-4f27-40bb-ac2c-5810139f0cca/85ad02dc-7182-4f7f-aeb0-a64abf6b1c58]
2026-02-19 02:28:37.353163 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=ebade39e-92a2-4d78-bc25-fdd075827f6f/74afed04-a71e-4a02-a193-e459fbff666b]
2026-02-19 02:28:37.367810 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=3bd0ac25-bbae-44b8-8be9-7e79b2681278/50533a39-fac2-4c6c-8c30-88a176048417]
2026-02-19 02:28:37.391281 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=43e6f7cb-4f27-40bb-ac2c-5810139f0cca/170e0235-dc73-4e1c-89b5-c2562fe21aa0]
2026-02-19 02:28:37.399635 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=ebade39e-92a2-4d78-bc25-fdd075827f6f/4779b863-88a8-4699-869f-263c4bc04c46]
2026-02-19 02:28:37.416784 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=3bd0ac25-bbae-44b8-8be9-7e79b2681278/c1412cfc-917e-4010-87bd-d14c29c1eff8]
2026-02-19 02:28:43.499632 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=43e6f7cb-4f27-40bb-ac2c-5810139f0cca/06128b56-8ab2-4257-b6d0-e15d23330262]
2026-02-19 02:28:43.513735 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=ebade39e-92a2-4d78-bc25-fdd075827f6f/eb0041fe-9a39-4a97-a19c-5bfadd191a42]
2026-02-19 02:28:43.540547 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=3bd0ac25-bbae-44b8-8be9-7e79b2681278/c337844b-d29f-48f9-b97b-1b04477f979e]
2026-02-19 02:28:43.995594 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-19 02:28:53.996636 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-19 02:28:54.315730 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=755beb4d-ff66-4f9f-99cb-b5075a2154f9]
2026-02-19 02:28:54.338102 | orchestrator |
2026-02-19 02:28:54.338168 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-19 02:28:54.338192 | orchestrator |
2026-02-19 02:28:54.338200 | orchestrator | Outputs:
2026-02-19 02:28:54.338207 | orchestrator |
2026-02-19 02:28:54.338214 | orchestrator | manager_address =
2026-02-19 02:28:54.338221 | orchestrator | private_key =
2026-02-19 02:28:54.556706 | orchestrator | ok: Runtime: 0:01:08.393025
2026-02-19 02:28:54.589826 |
2026-02-19 02:28:54.589985 | TASK [Fetch manager address]
2026-02-19 02:28:55.086861 | orchestrator | ok
2026-02-19 02:28:55.097121 |
2026-02-19 02:28:55.097301 | TASK [Set manager_host address]
2026-02-19 02:28:55.176668 | orchestrator | ok
2026-02-19 02:28:55.185991 |
2026-02-19 02:28:55.186132 | LOOP [Update ansible collections]
2026-02-19 02:28:58.049672 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-19 02:28:58.050075 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-19 02:28:58.050197 | orchestrator | Starting galaxy collection install process
2026-02-19 02:28:58.050242 | orchestrator | Process install dependency map
2026-02-19 02:28:58.050275 | orchestrator | Starting collection install process
2026-02-19 02:28:58.050306 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-19 02:28:58.050342 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-19 02:28:58.050378 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-19 02:28:58.050456 | orchestrator | ok: Item: commons Runtime: 0:00:02.477266
2026-02-19 02:28:59.122221 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-19 02:28:59.122489 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-19 02:28:59.122576 | orchestrator | Starting galaxy collection install process
2026-02-19 02:28:59.122638 | orchestrator | Process install dependency map
2026-02-19 02:28:59.122696 | orchestrator | Starting collection install process
2026-02-19 02:28:59.122748 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-19 02:28:59.122801 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-19 02:28:59.122960 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-19 02:28:59.123042 | orchestrator | ok: Item: services Runtime: 0:00:00.715943
2026-02-19 02:28:59.148984 |
2026-02-19 02:28:59.149241 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-19 02:29:09.745301 | orchestrator | ok
2026-02-19 02:29:09.755186 |
2026-02-19 02:29:09.755301 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-19 02:30:09.807221 | orchestrator | ok
2026-02-19 02:30:09.818507 |
2026-02-19 02:30:09.818654 | TASK [Fetch manager ssh hostkey]
2026-02-19 02:30:11.402278 | orchestrator | Output suppressed because no_log was given
2026-02-19 02:30:11.420822 |
2026-02-19 02:30:11.421017 | TASK [Get ssh keypair from terraform environment]
2026-02-19 02:30:11.958721 | orchestrator | ok: Runtime: 0:00:00.008194
2026-02-19 02:30:11.973975 |
2026-02-19 02:30:11.974217 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-19 02:30:12.023565 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-19 02:30:12.033861 |
2026-02-19 02:30:12.033999 | TASK [Run manager part 0]
2026-02-19 02:30:13.237866 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-19 02:30:13.304362 | orchestrator |
2026-02-19 02:30:13.304428 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-19 02:30:13.304439 | orchestrator |
2026-02-19 02:30:13.304457 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-19 02:30:15.028529 | orchestrator | ok: [testbed-manager]
2026-02-19 02:30:15.028566 | orchestrator |
2026-02-19 02:30:15.028584 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-19 02:30:15.028592 | orchestrator |
2026-02-19 02:30:15.028600 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-19 02:30:16.846602 | orchestrator | ok: [testbed-manager]
2026-02-19 02:30:16.846647 | orchestrator |
2026-02-19 02:30:16.846654 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-19 02:30:17.488558 | orchestrator | ok: [testbed-manager]
2026-02-19 02:30:17.488619 | orchestrator |
2026-02-19 02:30:17.488630 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-19 02:30:17.537519 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:30:17.537563 | orchestrator |
2026-02-19 02:30:17.537573 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-19 02:30:17.569283 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:30:17.569356 | orchestrator |
2026-02-19 02:30:17.569373 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-19 02:30:17.603432 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:30:17.603499 | orchestrator | 2026-02-19 02:30:17.603510 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-19 02:30:17.645766 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:30:17.645826 | orchestrator | 2026-02-19 02:30:17.645835 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-19 02:30:17.681470 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:30:17.681525 | orchestrator | 2026-02-19 02:30:17.681535 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-19 02:30:17.723358 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:30:17.723432 | orchestrator | 2026-02-19 02:30:17.723444 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-19 02:30:17.766413 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:30:17.766483 | orchestrator | 2026-02-19 02:30:17.766493 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-19 02:30:18.468641 | orchestrator | changed: [testbed-manager] 2026-02-19 02:30:18.468747 | orchestrator | 2026-02-19 02:30:18.468755 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-19 02:32:50.309721 | orchestrator | changed: [testbed-manager] 2026-02-19 02:32:50.309771 | orchestrator | 2026-02-19 02:32:50.309779 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-19 02:34:20.277304 | orchestrator | changed: [testbed-manager] 2026-02-19 02:34:20.277390 | orchestrator | 2026-02-19 02:34:20.277402 | orchestrator | TASK [Install required 
packages] *********************************************** 2026-02-19 02:34:39.185675 | orchestrator | changed: [testbed-manager] 2026-02-19 02:34:39.185800 | orchestrator | 2026-02-19 02:34:39.185822 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-19 02:34:47.930465 | orchestrator | changed: [testbed-manager] 2026-02-19 02:34:47.930554 | orchestrator | 2026-02-19 02:34:47.930566 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-19 02:34:47.988103 | orchestrator | ok: [testbed-manager] 2026-02-19 02:34:47.988223 | orchestrator | 2026-02-19 02:34:47.988240 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-19 02:34:48.762342 | orchestrator | ok: [testbed-manager] 2026-02-19 02:34:48.762433 | orchestrator | 2026-02-19 02:34:48.762445 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-19 02:34:49.484598 | orchestrator | changed: [testbed-manager] 2026-02-19 02:34:49.484674 | orchestrator | 2026-02-19 02:34:49.484686 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-19 02:34:55.403887 | orchestrator | changed: [testbed-manager] 2026-02-19 02:34:55.403964 | orchestrator | 2026-02-19 02:34:55.403993 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-19 02:35:00.609744 | orchestrator | changed: [testbed-manager] 2026-02-19 02:35:00.610471 | orchestrator | 2026-02-19 02:35:00.610509 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-19 02:35:02.990286 | orchestrator | changed: [testbed-manager] 2026-02-19 02:35:02.990354 | orchestrator | 2026-02-19 02:35:02.990363 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-19 02:35:04.564487 | 
orchestrator | changed: [testbed-manager] 2026-02-19 02:35:04.564575 | orchestrator | 2026-02-19 02:35:04.564599 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-19 02:35:05.565411 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-19 02:35:05.565494 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-19 02:35:05.565506 | orchestrator | 2026-02-19 02:35:05.565518 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-19 02:35:05.610713 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-19 02:35:05.610843 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-19 02:35:05.610862 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-19 02:35:05.610875 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-19 02:35:12.612550 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-19 02:35:12.612645 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-19 02:35:12.612658 | orchestrator | 2026-02-19 02:35:12.612669 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-19 02:35:13.138976 | orchestrator | changed: [testbed-manager] 2026-02-19 02:35:13.139054 | orchestrator | 2026-02-19 02:35:13.139064 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-19 02:36:32.713385 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-19 02:36:32.713631 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-19 02:36:32.713659 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-19 02:36:32.713675 | orchestrator | 2026-02-19 02:36:32.713691 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-19 02:36:34.781132 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-19 02:36:34.781169 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-19 02:36:34.781174 | orchestrator | 2026-02-19 02:36:34.781180 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-19 02:36:34.781185 | orchestrator | 2026-02-19 02:36:34.781189 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-19 02:36:36.083758 | orchestrator | ok: [testbed-manager] 2026-02-19 02:36:36.083795 | orchestrator | 2026-02-19 02:36:36.083802 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-19 02:36:36.123917 | orchestrator | ok: [testbed-manager] 2026-02-19 02:36:36.123966 | 
orchestrator | 2026-02-19 02:36:36.123975 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-19 02:36:36.200378 | orchestrator | ok: [testbed-manager] 2026-02-19 02:36:36.200423 | orchestrator | 2026-02-19 02:36:36.200459 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-19 02:36:36.904276 | orchestrator | changed: [testbed-manager] 2026-02-19 02:36:36.904320 | orchestrator | 2026-02-19 02:36:36.904329 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-19 02:36:37.586854 | orchestrator | changed: [testbed-manager] 2026-02-19 02:36:37.586890 | orchestrator | 2026-02-19 02:36:37.586896 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-19 02:36:38.832230 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-19 02:36:38.832309 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-19 02:36:38.832320 | orchestrator | 2026-02-19 02:36:38.832344 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-19 02:36:40.214901 | orchestrator | changed: [testbed-manager] 2026-02-19 02:36:40.214996 | orchestrator | 2026-02-19 02:36:40.215006 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-19 02:36:41.845289 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-19 02:36:41.845377 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-19 02:36:41.845394 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-19 02:36:41.845408 | orchestrator | 2026-02-19 02:36:41.845425 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-19 02:36:41.903759 | orchestrator | skipping: 
[testbed-manager] 2026-02-19 02:36:41.903845 | orchestrator | 2026-02-19 02:36:41.903857 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-19 02:36:41.978564 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:36:41.978640 | orchestrator | 2026-02-19 02:36:41.978651 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-19 02:36:42.480046 | orchestrator | changed: [testbed-manager] 2026-02-19 02:36:42.480154 | orchestrator | 2026-02-19 02:36:42.480178 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-19 02:36:42.548710 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:36:42.548776 | orchestrator | 2026-02-19 02:36:42.548784 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-19 02:36:43.346427 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-19 02:36:43.346528 | orchestrator | changed: [testbed-manager] 2026-02-19 02:36:43.346541 | orchestrator | 2026-02-19 02:36:43.346550 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-19 02:36:43.379605 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:36:43.379675 | orchestrator | 2026-02-19 02:36:43.379684 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-19 02:36:43.416367 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:36:43.416440 | orchestrator | 2026-02-19 02:36:43.416493 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-19 02:36:43.453410 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:36:43.453518 | orchestrator | 2026-02-19 02:36:43.453532 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-19 02:36:43.523778 | 
orchestrator | skipping: [testbed-manager] 2026-02-19 02:36:43.523877 | orchestrator | 2026-02-19 02:36:43.523894 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-19 02:36:44.206754 | orchestrator | ok: [testbed-manager] 2026-02-19 02:36:44.207307 | orchestrator | 2026-02-19 02:36:44.207326 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-19 02:36:44.207334 | orchestrator | 2026-02-19 02:36:44.207341 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-19 02:36:45.492132 | orchestrator | ok: [testbed-manager] 2026-02-19 02:36:45.492228 | orchestrator | 2026-02-19 02:36:45.492244 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-19 02:36:46.400480 | orchestrator | changed: [testbed-manager] 2026-02-19 02:36:46.400609 | orchestrator | 2026-02-19 02:36:46.400633 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 02:36:46.400644 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-19 02:36:46.400654 | orchestrator | 2026-02-19 02:36:46.819704 | orchestrator | ok: Runtime: 0:06:34.156343 2026-02-19 02:36:46.839333 | 2026-02-19 02:36:46.839495 | TASK [Point out that logging in on the manager is now possible] 2026-02-19 02:36:46.882311 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-02-19 02:36:46.891239 | 2026-02-19 02:36:46.891370 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-19 02:36:46.933285 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-19 02:36:46.941430 | 2026-02-19 02:36:46.941538 | TASK [Run manager part 1 + 2] 2026-02-19 02:36:47.823253 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-19 02:36:47.886342 | orchestrator | 2026-02-19 02:36:47.886411 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-19 02:36:47.886421 | orchestrator | 2026-02-19 02:36:47.886439 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-19 02:36:50.606570 | orchestrator | ok: [testbed-manager] 2026-02-19 02:36:50.606623 | orchestrator | 2026-02-19 02:36:50.606647 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-19 02:36:50.663094 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:36:50.663168 | orchestrator | 2026-02-19 02:36:50.663187 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-19 02:36:50.709814 | orchestrator | ok: [testbed-manager] 2026-02-19 02:36:50.709869 | orchestrator | 2026-02-19 02:36:50.709879 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-19 02:36:50.761399 | orchestrator | ok: [testbed-manager] 2026-02-19 02:36:50.761449 | orchestrator | 2026-02-19 02:36:50.761458 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-19 02:36:50.836015 | orchestrator | ok: [testbed-manager] 2026-02-19 02:36:50.836067 | orchestrator | 2026-02-19 02:36:50.836078 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-19 02:36:50.899700 | orchestrator | ok: [testbed-manager] 2026-02-19 02:36:50.899765 | orchestrator | 2026-02-19 02:36:50.899779 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-19 02:36:50.959326 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-19 02:36:50.959378 | orchestrator | 2026-02-19 02:36:50.959386 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-19 02:36:51.646945 | orchestrator | ok: [testbed-manager] 2026-02-19 02:36:51.647017 | orchestrator | 2026-02-19 02:36:51.647034 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-19 02:36:51.700239 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:36:51.700293 | orchestrator | 2026-02-19 02:36:51.700302 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-19 02:36:53.027010 | orchestrator | changed: [testbed-manager] 2026-02-19 02:36:53.027071 | orchestrator | 2026-02-19 02:36:53.027082 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-19 02:36:53.554233 | orchestrator | ok: [testbed-manager] 2026-02-19 02:36:53.554314 | orchestrator | 2026-02-19 02:36:53.554327 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-19 02:36:54.628189 | orchestrator | changed: [testbed-manager] 2026-02-19 02:36:54.628394 | orchestrator | 2026-02-19 02:36:54.628428 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-19 02:37:08.928074 | orchestrator | changed: [testbed-manager] 2026-02-19 02:37:08.928177 | orchestrator | 2026-02-19 02:37:08.928194 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-19 02:37:09.534776 | orchestrator | ok: [testbed-manager] 2026-02-19 02:37:09.534817 | orchestrator | 2026-02-19 02:37:09.534827 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-19 02:37:09.595115 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:37:09.595157 | orchestrator | 2026-02-19 02:37:09.595165 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-19 02:37:10.450575 | orchestrator | changed: [testbed-manager] 2026-02-19 02:37:10.450624 | orchestrator | 2026-02-19 02:37:10.450633 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-19 02:37:11.320747 | orchestrator | changed: [testbed-manager] 2026-02-19 02:37:11.320828 | orchestrator | 2026-02-19 02:37:11.320839 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-19 02:37:11.846499 | orchestrator | changed: [testbed-manager] 2026-02-19 02:37:11.846606 | orchestrator | 2026-02-19 02:37:11.846620 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-19 02:37:11.884236 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-19 02:37:11.884397 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-19 02:37:11.884430 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-19 02:37:11.884449 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-19 02:37:14.171446 | orchestrator | changed: [testbed-manager] 2026-02-19 02:37:14.171498 | orchestrator | 2026-02-19 02:37:14.171507 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-19 02:37:22.641270 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-19 02:37:22.641345 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-19 02:37:22.641356 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-19 02:37:22.641363 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-19 02:37:22.641378 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-19 02:37:22.641386 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-19 02:37:22.641392 | orchestrator | 2026-02-19 02:37:22.641401 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-19 02:37:23.646168 | orchestrator | changed: [testbed-manager] 2026-02-19 02:37:23.646271 | orchestrator | 2026-02-19 02:37:23.646280 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-19 02:37:23.679328 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:37:23.679437 | orchestrator | 2026-02-19 02:37:23.679444 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-19 02:37:26.581432 | orchestrator | changed: [testbed-manager] 2026-02-19 02:37:26.581521 | orchestrator | 2026-02-19 02:37:26.581530 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-19 02:37:26.625290 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:37:26.625378 | orchestrator | 2026-02-19 02:37:26.625391 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-19 02:38:52.404092 | orchestrator | changed: [testbed-manager] 2026-02-19 
02:38:52.404131 | orchestrator | 2026-02-19 02:38:52.404140 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-19 02:38:53.352224 | orchestrator | ok: [testbed-manager] 2026-02-19 02:38:53.352320 | orchestrator | 2026-02-19 02:38:53.352338 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 02:38:53.352352 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-19 02:38:53.352364 | orchestrator | 2026-02-19 02:38:53.554559 | orchestrator | ok: Runtime: 0:02:06.197803 2026-02-19 02:38:53.571656 | 2026-02-19 02:38:53.571800 | TASK [Reboot manager] 2026-02-19 02:38:55.111479 | orchestrator | ok: Runtime: 0:00:00.907336 2026-02-19 02:38:55.128910 | 2026-02-19 02:38:55.129118 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-19 02:39:07.893218 | orchestrator | ok 2026-02-19 02:39:07.900702 | 2026-02-19 02:39:07.900812 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-19 02:40:07.949286 | orchestrator | ok 2026-02-19 02:40:07.958405 | 2026-02-19 02:40:07.958520 | TASK [Deploy manager + bootstrap nodes] 2026-02-19 02:40:10.161151 | orchestrator | 2026-02-19 02:40:10.161377 | orchestrator | # DEPLOY MANAGER 2026-02-19 02:40:10.161397 | orchestrator | 2026-02-19 02:40:10.161408 | orchestrator | + set -e 2026-02-19 02:40:10.161418 | orchestrator | + echo 2026-02-19 02:40:10.161426 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-19 02:40:10.161435 | orchestrator | + echo 2026-02-19 02:40:10.161463 | orchestrator | + cat /opt/manager-vars.sh 2026-02-19 02:40:10.164225 | orchestrator | export NUMBER_OF_NODES=6 2026-02-19 02:40:10.164293 | orchestrator | 2026-02-19 02:40:10.164305 | orchestrator | export CEPH_VERSION=reef 2026-02-19 02:40:10.164316 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-19 02:40:10.164326 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-19 02:40:10.164347 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-19 02:40:10.164356 | orchestrator | 2026-02-19 02:40:10.164369 | orchestrator | export ARA=false 2026-02-19 02:40:10.164378 | orchestrator | export DEPLOY_MODE=manager 2026-02-19 02:40:10.164391 | orchestrator | export TEMPEST=false 2026-02-19 02:40:10.164399 | orchestrator | export IS_ZUUL=true 2026-02-19 02:40:10.164407 | orchestrator | 2026-02-19 02:40:10.164420 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 02:40:10.164429 | orchestrator | export EXTERNAL_API=false 2026-02-19 02:40:10.164437 | orchestrator | 2026-02-19 02:40:10.164445 | orchestrator | export IMAGE_USER=ubuntu 2026-02-19 02:40:10.164458 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-19 02:40:10.164466 | orchestrator | 2026-02-19 02:40:10.164474 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-19 02:40:10.164491 | orchestrator | 2026-02-19 02:40:10.164500 | orchestrator | + echo 2026-02-19 02:40:10.164509 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-19 02:40:10.165061 | orchestrator | ++ export INTERACTIVE=false 2026-02-19 02:40:10.165088 | orchestrator | ++ INTERACTIVE=false 2026-02-19 02:40:10.165097 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-19 02:40:10.165106 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-19 02:40:10.165190 | orchestrator | + source /opt/manager-vars.sh 2026-02-19 02:40:10.165201 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-19 02:40:10.165210 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-19 02:40:10.165218 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-19 02:40:10.165226 | orchestrator | ++ CEPH_VERSION=reef 2026-02-19 02:40:10.165258 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-19 02:40:10.165269 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-19 02:40:10.165277 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-19 02:40:10.165285 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-19 02:40:10.165293 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-19 02:40:10.165313 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-19 02:40:10.165322 | orchestrator | ++ export ARA=false 2026-02-19 02:40:10.165330 | orchestrator | ++ ARA=false 2026-02-19 02:40:10.165338 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-19 02:40:10.165346 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-19 02:40:10.165357 | orchestrator | ++ export TEMPEST=false 2026-02-19 02:40:10.165365 | orchestrator | ++ TEMPEST=false 2026-02-19 02:40:10.165374 | orchestrator | ++ export IS_ZUUL=true 2026-02-19 02:40:10.165382 | orchestrator | ++ IS_ZUUL=true 2026-02-19 02:40:10.165390 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 02:40:10.165398 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 02:40:10.165480 | orchestrator | ++ export EXTERNAL_API=false 2026-02-19 02:40:10.165491 | orchestrator | ++ EXTERNAL_API=false 2026-02-19 02:40:10.165499 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-19 02:40:10.165507 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-19 02:40:10.165609 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-19 02:40:10.165621 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-19 02:40:10.165629 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-19 02:40:10.165637 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-19 02:40:10.165645 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-19 02:40:10.222721 | orchestrator | + docker version 2026-02-19 02:40:10.330185 | orchestrator | Client: Docker Engine - Community 2026-02-19 02:40:10.330265 | orchestrator | Version: 27.5.1 2026-02-19 02:40:10.330282 | orchestrator | API version: 1.47 2026-02-19 02:40:10.330296 | orchestrator | Go version: go1.22.11 2026-02-19 02:40:10.330302 | orchestrator | Git commit: 9f9e405 2026-02-19 02:40:10.330309 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-19 02:40:10.330317 | orchestrator | OS/Arch: linux/amd64 2026-02-19 02:40:10.330323 | orchestrator | Context: default 2026-02-19 02:40:10.330331 | orchestrator | 2026-02-19 02:40:10.330335 | orchestrator | Server: Docker Engine - Community 2026-02-19 02:40:10.330339 | orchestrator | Engine: 2026-02-19 02:40:10.330344 | orchestrator | Version: 27.5.1 2026-02-19 02:40:10.330348 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-19 02:40:10.330372 | orchestrator | Go version: go1.22.11 2026-02-19 02:40:10.330386 | orchestrator | Git commit: 4c9b3b0 2026-02-19 02:40:10.330390 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-19 02:40:10.330393 | orchestrator | OS/Arch: linux/amd64 2026-02-19 02:40:10.330397 | orchestrator | Experimental: false 2026-02-19 02:40:10.330401 | orchestrator | containerd: 2026-02-19 02:40:10.330405 | orchestrator | Version: v2.2.1 2026-02-19 02:40:10.330409 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-19 02:40:10.330414 | orchestrator | runc: 2026-02-19 02:40:10.330420 | orchestrator | Version: 1.3.4 2026-02-19 02:40:10.330424 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-19 02:40:10.330428 | orchestrator | docker-init: 2026-02-19 02:40:10.330431 | orchestrator | Version: 0.19.0 2026-02-19 02:40:10.330436 | orchestrator | GitCommit: de40ad0 2026-02-19 02:40:10.332874 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-19 02:40:10.342091 | orchestrator | + set -e 2026-02-19 02:40:10.342961 | orchestrator | + source /opt/manager-vars.sh 2026-02-19 02:40:10.342999 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-19 02:40:10.343007 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-19 02:40:10.343014 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-19 02:40:10.343020 | orchestrator | ++ CEPH_VERSION=reef 2026-02-19 02:40:10.343027 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-19 
02:40:10.343035 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-19 02:40:10.343042 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-19 02:40:10.343049 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-19 02:40:10.343055 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-19 02:40:10.343062 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-19 02:40:10.343069 | orchestrator | ++ export ARA=false 2026-02-19 02:40:10.343076 | orchestrator | ++ ARA=false 2026-02-19 02:40:10.343082 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-19 02:40:10.343089 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-19 02:40:10.343095 | orchestrator | ++ export TEMPEST=false 2026-02-19 02:40:10.343102 | orchestrator | ++ TEMPEST=false 2026-02-19 02:40:10.343108 | orchestrator | ++ export IS_ZUUL=true 2026-02-19 02:40:10.343115 | orchestrator | ++ IS_ZUUL=true 2026-02-19 02:40:10.343121 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 02:40:10.343127 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 02:40:10.343134 | orchestrator | ++ export EXTERNAL_API=false 2026-02-19 02:40:10.343140 | orchestrator | ++ EXTERNAL_API=false 2026-02-19 02:40:10.343146 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-19 02:40:10.343153 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-19 02:40:10.343159 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-19 02:40:10.343166 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-19 02:40:10.343172 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-19 02:40:10.343179 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-19 02:40:10.343185 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-19 02:40:10.343191 | orchestrator | ++ export INTERACTIVE=false 2026-02-19 02:40:10.343197 | orchestrator | ++ INTERACTIVE=false 2026-02-19 02:40:10.343204 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-19 02:40:10.343213 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-02-19 02:40:10.343219 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-19 02:40:10.343226 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-02-19 02:40:10.347215 | orchestrator | + set -e 2026-02-19 02:40:10.347263 | orchestrator | + VERSION=9.5.0 2026-02-19 02:40:10.347272 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-02-19 02:40:10.354290 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-19 02:40:10.354349 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-19 02:40:10.358699 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-19 02:40:10.363575 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-19 02:40:10.372049 | orchestrator | + set -e 2026-02-19 02:40:10.372138 | orchestrator | /opt/configuration ~ 2026-02-19 02:40:10.372151 | orchestrator | + pushd /opt/configuration 2026-02-19 02:40:10.372160 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-19 02:40:10.373591 | orchestrator | + source /opt/venv/bin/activate 2026-02-19 02:40:10.376219 | orchestrator | ++ deactivate nondestructive 2026-02-19 02:40:10.376297 | orchestrator | ++ '[' -n '' ']' 2026-02-19 02:40:10.376309 | orchestrator | ++ '[' -n '' ']' 2026-02-19 02:40:10.376358 | orchestrator | ++ hash -r 2026-02-19 02:40:10.376367 | orchestrator | ++ '[' -n '' ']' 2026-02-19 02:40:10.376371 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-19 02:40:10.376375 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-19 02:40:10.376379 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-19 02:40:10.376384 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-19 02:40:10.376388 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-19 02:40:10.376392 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-19 02:40:10.376396 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-19 02:40:10.376407 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-19 02:40:10.376412 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-19 02:40:10.376416 | orchestrator | ++ export PATH 2026-02-19 02:40:10.376420 | orchestrator | ++ '[' -n '' ']' 2026-02-19 02:40:10.376424 | orchestrator | ++ '[' -z '' ']' 2026-02-19 02:40:10.376428 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-19 02:40:10.376432 | orchestrator | ++ PS1='(venv) ' 2026-02-19 02:40:10.376435 | orchestrator | ++ export PS1 2026-02-19 02:40:10.376439 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-19 02:40:10.376443 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-19 02:40:10.376447 | orchestrator | ++ hash -r 2026-02-19 02:40:10.376451 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-19 02:40:11.351913 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-19 02:40:11.352716 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-19 02:40:11.353836 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-19 02:40:11.355136 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-19 02:40:11.356410 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-19 02:40:11.366610 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-19 02:40:11.368019 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-19 02:40:11.369047 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-19 02:40:11.370204 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-19 02:40:11.398894 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-19 02:40:11.400218 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-19 02:40:11.401835 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-19 02:40:11.403303 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-19 02:40:11.406940 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-19 02:40:11.600513 | orchestrator | ++ which gilt 2026-02-19 02:40:11.602906 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-19 02:40:11.602963 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-19 02:40:11.823497 | orchestrator | osism.cfg-generics: 2026-02-19 02:40:11.969830 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-19 02:40:11.970113 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-19 02:40:11.970178 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-19 02:40:11.970205 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-19 02:40:12.790456 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-19 02:40:12.800274 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-19 02:40:13.121793 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-19 02:40:13.174386 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-19 02:40:13.174461 | orchestrator | + deactivate 2026-02-19 02:40:13.174469 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-19 02:40:13.174475 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-19 02:40:13.174479 | orchestrator | + export PATH 2026-02-19 02:40:13.174484 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-19 02:40:13.174489 | orchestrator | + '[' -n '' ']' 2026-02-19 02:40:13.174495 | orchestrator | + hash -r 2026-02-19 02:40:13.174499 | orchestrator | + '[' -n '' ']' 2026-02-19 02:40:13.174503 | orchestrator | + unset VIRTUAL_ENV 2026-02-19 02:40:13.174514 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-19 02:40:13.174518 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-19 02:40:13.174522 | orchestrator | + unset -f deactivate 2026-02-19 02:40:13.174691 | orchestrator | ~ 2026-02-19 02:40:13.174698 | orchestrator | + popd 2026-02-19 02:40:13.176898 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-19 02:40:13.176951 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-19 02:40:13.177137 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-19 02:40:13.227300 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-19 02:40:13.227393 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-19 02:40:13.227405 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-19 02:40:13.279962 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-19 02:40:13.280080 | orchestrator | ++ semver 2024.2 2025.1 2026-02-19 02:40:13.333405 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-19 02:40:13.333498 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-19 02:40:13.418654 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-19 02:40:13.418745 | orchestrator | + source /opt/venv/bin/activate 2026-02-19 02:40:13.418755 | orchestrator | ++ deactivate nondestructive 2026-02-19 02:40:13.418763 | orchestrator | ++ '[' -n '' ']' 2026-02-19 02:40:13.418771 | orchestrator | ++ '[' -n '' ']' 2026-02-19 02:40:13.418777 | orchestrator | ++ hash -r 2026-02-19 02:40:13.418784 | orchestrator | ++ '[' -n '' ']' 2026-02-19 02:40:13.418790 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-19 02:40:13.418796 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-19 02:40:13.418802 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-19 02:40:13.418810 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-19 02:40:13.418816 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-19 02:40:13.418822 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-19 02:40:13.418827 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-19 02:40:13.418844 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-19 02:40:13.418871 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-19 02:40:13.418879 | orchestrator | ++ export PATH 2026-02-19 02:40:13.419228 | orchestrator | ++ '[' -n '' ']' 2026-02-19 02:40:13.419251 | orchestrator | ++ '[' -z '' ']' 2026-02-19 02:40:13.419257 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-19 02:40:13.419263 | orchestrator | ++ PS1='(venv) ' 2026-02-19 02:40:13.419269 | orchestrator | ++ export PS1 2026-02-19 02:40:13.419276 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-19 02:40:13.419282 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-19 02:40:13.419288 | orchestrator | ++ hash -r 2026-02-19 02:40:13.419295 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-19 02:40:14.455472 | orchestrator | 2026-02-19 02:40:14.455563 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-19 02:40:14.455573 | orchestrator | 2026-02-19 02:40:14.455581 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-19 02:40:14.993876 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:14.994094 | orchestrator | 2026-02-19 02:40:14.994118 | orchestrator | TASK [Copy fact files] ********************************************************* 
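Earlier in the trace, feature flags are gated on version comparisons: `semver 9.5.0 7.0.0` prints `1`, so `[[ 1 -ge 0 ]]` holds and `enable_osism_kubernetes: true` is appended, while the checks against `10.0.0-0` and `2025.1` print `-1` and are skipped. The `semver` helper itself is not shown in the log; a rough stand-in using `sort -V` (cruder pre-release handling than a real semver tool) behaves the same way for these inputs:

```shell
# Hypothetical stand-in for the `semver` helper seen in the trace:
# prints -1, 0, or 1 depending on how $1 compares to $2.
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1
    else
        echo 1
    fi
}

# Gating as in the trace: enable the kubernetes stack from 7.0.0 onwards.
if [[ $(semver 9.5.0 7.0.0) -ge 0 ]]; then
    echo 'enable_osism_kubernetes: true'
fi
```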
2026-02-19 02:40:15.942675 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:15.942786 | orchestrator | 2026-02-19 02:40:15.942805 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-19 02:40:15.942845 | orchestrator | 2026-02-19 02:40:15.942857 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-19 02:40:18.070547 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:18.070637 | orchestrator | 2026-02-19 02:40:18.070649 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-19 02:40:18.124011 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:18.124102 | orchestrator | 2026-02-19 02:40:18.124117 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-19 02:40:18.596318 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:18.596410 | orchestrator | 2026-02-19 02:40:18.596427 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-19 02:40:18.642419 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:40:18.642510 | orchestrator | 2026-02-19 02:40:18.642523 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-19 02:40:18.989598 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:18.989729 | orchestrator | 2026-02-19 02:40:18.989758 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-19 02:40:19.323758 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:19.323858 | orchestrator | 2026-02-19 02:40:19.323869 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-19 02:40:19.427178 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:40:19.427269 | orchestrator | 2026-02-19 02:40:19.427287 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-02-19 02:40:19.427301 | orchestrator | 2026-02-19 02:40:19.427314 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-19 02:40:21.083830 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:21.083939 | orchestrator | 2026-02-19 02:40:21.083958 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-19 02:40:21.182431 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-19 02:40:21.182529 | orchestrator | 2026-02-19 02:40:21.182545 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-19 02:40:21.245617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-19 02:40:21.245738 | orchestrator | 2026-02-19 02:40:21.245765 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-19 02:40:22.318736 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-19 02:40:22.318818 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-19 02:40:22.318828 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-19 02:40:22.318835 | orchestrator | 2026-02-19 02:40:22.318845 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-19 02:40:24.071495 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-19 02:40:24.071632 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-19 02:40:24.071660 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-19 02:40:24.071681 | orchestrator | 2026-02-19 02:40:24.071702 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-02-19 02:40:24.663177 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-19 02:40:24.663268 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:24.663280 | orchestrator | 2026-02-19 02:40:24.663290 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-19 02:40:25.312347 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-19 02:40:25.312421 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:25.312430 | orchestrator | 2026-02-19 02:40:25.312435 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-19 02:40:25.372958 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:40:25.373107 | orchestrator | 2026-02-19 02:40:25.373132 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-19 02:40:25.726606 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:25.726715 | orchestrator | 2026-02-19 02:40:25.726746 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-19 02:40:25.788837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-19 02:40:25.788951 | orchestrator | 2026-02-19 02:40:25.788974 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-19 02:40:26.862466 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:26.862609 | orchestrator | 2026-02-19 02:40:26.862625 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-19 02:40:27.658455 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:27.658557 | orchestrator | 2026-02-19 02:40:27.658573 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-19 02:40:37.381282 | 
orchestrator | changed: [testbed-manager] 2026-02-19 02:40:37.381383 | orchestrator | 2026-02-19 02:40:37.381395 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-19 02:40:37.438621 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:40:37.438727 | orchestrator | 2026-02-19 02:40:37.438770 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-19 02:40:37.438786 | orchestrator | 2026-02-19 02:40:37.438794 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-19 02:40:38.994562 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:38.994654 | orchestrator | 2026-02-19 02:40:38.994668 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-19 02:40:39.095735 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-19 02:40:39.095860 | orchestrator | 2026-02-19 02:40:39.095880 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-19 02:40:39.158911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-19 02:40:39.158987 | orchestrator | 2026-02-19 02:40:39.158996 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-19 02:40:41.128479 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:41.128566 | orchestrator | 2026-02-19 02:40:41.128581 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-19 02:40:41.159083 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:41.159199 | orchestrator | 2026-02-19 02:40:41.159222 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-19 02:40:41.253018 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-19 02:40:41.253103 | orchestrator | 2026-02-19 02:40:41.253111 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-19 02:40:43.771879 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-19 02:40:43.771983 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-19 02:40:43.771997 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-19 02:40:43.772009 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-19 02:40:43.772019 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-19 02:40:43.772030 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-19 02:40:43.772089 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-19 02:40:43.772100 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-19 02:40:43.772110 | orchestrator | 2026-02-19 02:40:43.772122 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-19 02:40:44.337136 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:44.337218 | orchestrator | 2026-02-19 02:40:44.337232 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-19 02:40:44.898584 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:44.898686 | orchestrator | 2026-02-19 02:40:44.898702 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-19 02:40:44.970409 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-19 02:40:44.970522 | orchestrator | 2026-02-19 02:40:44.970539 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-02-19 02:40:46.138284 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-19 02:40:46.138354 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-19 02:40:46.138360 | orchestrator | 2026-02-19 02:40:46.138366 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-19 02:40:46.772314 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:46.772418 | orchestrator | 2026-02-19 02:40:46.772436 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-19 02:40:46.821615 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:40:46.821710 | orchestrator | 2026-02-19 02:40:46.821724 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-19 02:40:46.898995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-19 02:40:46.899148 | orchestrator | 2026-02-19 02:40:46.899165 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-19 02:40:47.519350 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:47.519461 | orchestrator | 2026-02-19 02:40:47.519477 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-19 02:40:47.581471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-19 02:40:47.581558 | orchestrator | 2026-02-19 02:40:47.581570 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-19 02:40:48.940695 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-19 02:40:48.940784 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-02-19 02:40:48.940795 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:48.940804 | orchestrator | 2026-02-19 02:40:48.940811 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-19 02:40:49.558531 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:49.558662 | orchestrator | 2026-02-19 02:40:49.558692 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-19 02:40:49.606408 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:40:49.606533 | orchestrator | 2026-02-19 02:40:49.606559 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-19 02:40:49.703697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-19 02:40:49.703796 | orchestrator | 2026-02-19 02:40:49.703812 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-19 02:40:50.236679 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:50.236782 | orchestrator | 2026-02-19 02:40:50.236797 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-19 02:40:50.632260 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:50.632350 | orchestrator | 2026-02-19 02:40:50.632363 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-19 02:40:51.886497 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-19 02:40:51.886604 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-19 02:40:51.886620 | orchestrator | 2026-02-19 02:40:51.886634 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-19 02:40:52.524988 | orchestrator | changed: [testbed-manager] 2026-02-19 
02:40:52.525097 | orchestrator | 2026-02-19 02:40:52.525109 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-19 02:40:52.888659 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:52.888783 | orchestrator | 2026-02-19 02:40:52.888811 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-19 02:40:53.245431 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:53.245501 | orchestrator | 2026-02-19 02:40:53.245510 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-19 02:40:53.296374 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:40:53.296445 | orchestrator | 2026-02-19 02:40:53.296451 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-19 02:40:53.367244 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-19 02:40:53.367367 | orchestrator | 2026-02-19 02:40:53.367383 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-19 02:40:53.424795 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:53.425010 | orchestrator | 2026-02-19 02:40:53.425037 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-19 02:40:55.429242 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-19 02:40:55.429316 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-19 02:40:55.429325 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-19 02:40:55.429330 | orchestrator | 2026-02-19 02:40:55.429336 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-19 02:40:56.131459 | orchestrator | changed: [testbed-manager] 2026-02-19 
02:40:56.131544 | orchestrator | 2026-02-19 02:40:56.131555 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-19 02:40:56.853303 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:56.853404 | orchestrator | 2026-02-19 02:40:56.853421 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-19 02:40:57.540543 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:57.540650 | orchestrator | 2026-02-19 02:40:57.540668 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-19 02:40:57.614292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-19 02:40:57.614395 | orchestrator | 2026-02-19 02:40:57.614408 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-19 02:40:57.662443 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:57.662557 | orchestrator | 2026-02-19 02:40:57.662579 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-19 02:40:58.346424 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-19 02:40:58.346519 | orchestrator | 2026-02-19 02:40:58.346534 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-19 02:40:58.422376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-19 02:40:58.422498 | orchestrator | 2026-02-19 02:40:58.422525 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-19 02:40:59.125428 | orchestrator | changed: [testbed-manager] 2026-02-19 02:40:59.125532 | orchestrator | 2026-02-19 02:40:59.125545 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-02-19 02:40:59.698822 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:59.698951 | orchestrator | 2026-02-19 02:40:59.698977 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-19 02:40:59.741537 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:40:59.741614 | orchestrator | 2026-02-19 02:40:59.741624 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-19 02:40:59.803139 | orchestrator | ok: [testbed-manager] 2026-02-19 02:40:59.803231 | orchestrator | 2026-02-19 02:40:59.803245 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-19 02:41:00.600368 | orchestrator | changed: [testbed-manager] 2026-02-19 02:41:00.600486 | orchestrator | 2026-02-19 02:41:00.600516 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-19 02:42:02.390819 | orchestrator | changed: [testbed-manager] 2026-02-19 02:42:02.390934 | orchestrator | 2026-02-19 02:42:02.390950 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-19 02:42:03.370157 | orchestrator | ok: [testbed-manager] 2026-02-19 02:42:03.370373 | orchestrator | 2026-02-19 02:42:03.370401 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-19 02:42:03.428661 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:42:03.428777 | orchestrator | 2026-02-19 02:42:03.428802 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-19 02:42:05.786735 | orchestrator | changed: [testbed-manager] 2026-02-19 02:42:05.786854 | orchestrator | 2026-02-19 02:42:05.786877 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-02-19 02:42:05.854784 | orchestrator | ok: [testbed-manager] 2026-02-19 02:42:05.854878 | orchestrator | 2026-02-19 02:42:05.854898 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-19 02:42:05.854917 | orchestrator | 2026-02-19 02:42:05.854950 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-19 02:42:06.002575 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:42:06.002660 | orchestrator | 2026-02-19 02:42:06.002671 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-19 02:43:06.048749 | orchestrator | Pausing for 60 seconds 2026-02-19 02:43:06.048829 | orchestrator | changed: [testbed-manager] 2026-02-19 02:43:06.048835 | orchestrator | 2026-02-19 02:43:06.048841 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-19 02:43:09.091690 | orchestrator | changed: [testbed-manager] 2026-02-19 02:43:09.091783 | orchestrator | 2026-02-19 02:43:09.091794 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-19 02:44:11.205307 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-19 02:44:11.205390 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-02-19 02:44:11.205412 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
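The `FAILED - RETRYING` lines above come from Ansible's until/retries mechanism: the handler polls the manager service's health and burns one of its 50 retries per failed probe until the check succeeds. A hedged sketch of that pattern (the probe below is a stand-in that turns healthy on the third attempt; the real handler inspects the container's health status):

```shell
retries=50
delay=0                               # the real handler waits between attempts
attempt=0
probe() { [[ $attempt -ge 3 ]]; }     # hypothetical health probe

# Poll until healthy or retries are exhausted.
until probe; do
    attempt=$((attempt + 1))
    if [[ $attempt -ge $retries ]]; then
        echo "service never became healthy" >&2
        exit 1
    fi
    echo "FAILED - RETRYING: ($((retries - attempt)) retries left)."
    sleep "$delay"
done
echo "service healthy after $attempt failed probes"
```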
2026-02-19 02:44:11.205417 | orchestrator | changed: [testbed-manager] 2026-02-19 02:44:11.205423 | orchestrator | 2026-02-19 02:44:11.205428 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-19 02:44:21.870553 | orchestrator | changed: [testbed-manager] 2026-02-19 02:44:21.870640 | orchestrator | 2026-02-19 02:44:21.870653 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-19 02:44:21.963987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-19 02:44:21.964074 | orchestrator | 2026-02-19 02:44:21.964085 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-19 02:44:21.964094 | orchestrator | 2026-02-19 02:44:21.964099 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-19 02:44:22.016814 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:44:22.016884 | orchestrator | 2026-02-19 02:44:22.016894 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-19 02:44:22.104682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-19 02:44:22.104756 | orchestrator | 2026-02-19 02:44:22.104765 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-19 02:44:22.890248 | orchestrator | changed: [testbed-manager] 2026-02-19 02:44:22.890334 | orchestrator | 2026-02-19 02:44:22.890343 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-19 02:44:26.100764 | orchestrator | ok: [testbed-manager] 2026-02-19 02:44:26.100842 | orchestrator | 2026-02-19 02:44:26.100852 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-02-19 02:44:26.177683 | orchestrator | ok: [testbed-manager] => { 2026-02-19 02:44:26.177762 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-19 02:44:26.177775 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-19 02:44:26.177782 | orchestrator | "Checking running containers against expected versions...", 2026-02-19 02:44:26.177791 | orchestrator | "", 2026-02-19 02:44:26.177799 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-19 02:44:26.177805 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-19 02:44:26.177812 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.177819 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-19 02:44:26.177825 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.177832 | orchestrator | "", 2026-02-19 02:44:26.177839 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-19 02:44:26.177868 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-19 02:44:26.177876 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.177882 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-19 02:44:26.177888 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.177895 | orchestrator | "", 2026-02-19 02:44:26.177901 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-19 02:44:26.177908 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-19 02:44:26.177915 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.177921 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-19 02:44:26.177928 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.177934 | orchestrator | 
"", 2026-02-19 02:44:26.177938 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-19 02:44:26.177943 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-19 02:44:26.177947 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.177951 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-19 02:44:26.177955 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.177959 | orchestrator | "", 2026-02-19 02:44:26.177965 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-19 02:44:26.177969 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-19 02:44:26.177973 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.177977 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-19 02:44:26.177980 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.177995 | orchestrator | "", 2026-02-19 02:44:26.177999 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-19 02:44:26.178003 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-19 02:44:26.178007 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.178011 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-19 02:44:26.178049 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.178053 | orchestrator | "", 2026-02-19 02:44:26.178058 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-19 02:44:26.178062 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-19 02:44:26.178065 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.178070 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-19 02:44:26.178074 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.178078 | orchestrator | "", 2026-02-19 02:44:26.178082 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-19 02:44:26.178086 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-19 02:44:26.178090 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.178094 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-19 02:44:26.178098 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.178102 | orchestrator | "", 2026-02-19 02:44:26.178106 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-19 02:44:26.178110 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-19 02:44:26.178114 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.178118 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-19 02:44:26.178122 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.178126 | orchestrator | "", 2026-02-19 02:44:26.178130 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-19 02:44:26.178134 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-19 02:44:26.178138 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.178142 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-19 02:44:26.178145 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.178149 | orchestrator | "", 2026-02-19 02:44:26.178153 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-19 02:44:26.178163 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-19 02:44:26.178167 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.178171 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-19 02:44:26.178175 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.178179 | orchestrator | "", 2026-02-19 02:44:26.178183 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-19 02:44:26.178186 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-19 02:44:26.178190 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.178194 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-19 02:44:26.178198 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.178203 | orchestrator | "", 2026-02-19 02:44:26.178207 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-19 02:44:26.178211 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-19 02:44:26.178215 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.178219 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-19 02:44:26.178223 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.178227 | orchestrator | "", 2026-02-19 02:44:26.178231 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-19 02:44:26.178235 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-19 02:44:26.178239 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.178243 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-19 02:44:26.178259 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.178264 | orchestrator | "", 2026-02-19 02:44:26.178268 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-19 02:44:26.178273 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-19 02:44:26.178283 | orchestrator | " Enabled: true", 2026-02-19 02:44:26.178287 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-19 02:44:26.178292 | orchestrator | " Status: ✅ MATCH", 2026-02-19 02:44:26.178298 | orchestrator | "", 2026-02-19 02:44:26.178305 | orchestrator | "=== Summary ===", 2026-02-19 02:44:26.178312 | orchestrator | "Errors (version mismatches): 0", 2026-02-19 02:44:26.178318 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-19 02:44:26.178325 | orchestrator | "", 2026-02-19 02:44:26.178331 | orchestrator | "✅ All running containers match expected versions!" 2026-02-19 02:44:26.178338 | orchestrator | ] 2026-02-19 02:44:26.178344 | orchestrator | } 2026-02-19 02:44:26.178350 | orchestrator | 2026-02-19 02:44:26.178356 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-19 02:44:26.236247 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:44:26.236388 | orchestrator | 2026-02-19 02:44:26.236400 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 02:44:26.236408 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-19 02:44:26.236415 | orchestrator | 2026-02-19 02:44:26.333837 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-19 02:44:26.333915 | orchestrator | + deactivate 2026-02-19 02:44:26.333927 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-19 02:44:26.333936 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-19 02:44:26.333942 | orchestrator | + export PATH 2026-02-19 02:44:26.333949 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-19 02:44:26.333956 | orchestrator | + '[' -n '' ']' 2026-02-19 02:44:26.333963 | orchestrator | + hash -r 2026-02-19 02:44:26.333969 | orchestrator | + '[' -n '' ']' 2026-02-19 02:44:26.333976 | orchestrator | + unset VIRTUAL_ENV 2026-02-19 02:44:26.333980 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-19 02:44:26.333984 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-19 02:44:26.333988 | orchestrator | + unset -f deactivate 2026-02-19 02:44:26.333993 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-19 02:44:26.342212 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-19 02:44:26.342297 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-19 02:44:26.342331 | orchestrator | + local max_attempts=60 2026-02-19 02:44:26.342338 | orchestrator | + local name=ceph-ansible 2026-02-19 02:44:26.342343 | orchestrator | + local attempt_num=1 2026-02-19 02:44:26.342557 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 02:44:26.377932 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-19 02:44:26.378073 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-19 02:44:26.378090 | orchestrator | + local max_attempts=60 2026-02-19 02:44:26.378102 | orchestrator | + local name=kolla-ansible 2026-02-19 02:44:26.378111 | orchestrator | + local attempt_num=1 2026-02-19 02:44:26.378271 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-19 02:44:26.417310 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-19 02:44:26.417399 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-19 02:44:26.417411 | orchestrator | + local max_attempts=60 2026-02-19 02:44:26.417421 | orchestrator | + local name=osism-ansible 2026-02-19 02:44:26.417428 | orchestrator | + local attempt_num=1 2026-02-19 02:44:26.418115 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-19 02:44:26.456675 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-19 02:44:26.456751 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-19 02:44:26.456763 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-19 02:44:27.159582 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-19 02:44:27.332044 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-19 02:44:27.332147 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-19 02:44:27.332164 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-19 02:44:27.332177 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-19 02:44:27.332191 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-19 02:44:27.332225 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-19 02:44:27.332237 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-19 02:44:27.332248 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-19 02:44:27.332259 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-19 02:44:27.332271 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-19 02:44:27.332282 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-02-19 02:44:27.332293 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-19 02:44:27.332304 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-19 02:44:27.332340 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-19 02:44:27.332352 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-19 02:44:27.332364 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-19 02:44:27.338204 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-19 02:44:27.381266 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-19 02:44:27.381359 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-19 02:44:27.385160 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-19 02:44:39.683366 | orchestrator | 2026-02-19 02:44:39 | INFO  | Task da22ab6f-8387-4b65-8886-7a4f6676695a (resolvconf) was prepared for execution. 2026-02-19 02:44:39.683469 | orchestrator | 2026-02-19 02:44:39 | INFO  | It takes a moment until task da22ab6f-8387-4b65-8886-7a4f6676695a (resolvconf) has been started and output is visible here. 
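The trace above (`++ semver 9.5.0 7.0.0` followed by `+ [[ 1 -ge 0 ]]`) implies a `semver` helper that prints 1 when the first version is newer, 0 when equal, and -1 when older, which the script then gates on with `-ge 0`. A sketch consistent with that contract, using GNU `sort -V` for version ordering (the real helper on the manager may be implemented differently):

```shell
# Sketch of a semver-style comparison matching the observed contract:
# prints 1 if a > b, 0 if a == b, -1 if a < b. Assumes GNU sort -V.
semver() {
    local a="$1" b="$2"
    if [[ "$a" == "$b" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$a" "$b" | sort -V | head -n1)" == "$b" ]]; then
        echo 1    # b sorts first, so a is the newer version
    else
        echo -1
    fi
}
```

With this contract, `[[ $(semver 9.5.0 7.0.0) -ge 0 ]]` is true exactly when the installed version is at least the required one, matching the branch taken in the log.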
2026-02-19 02:44:52.435314 | orchestrator | 2026-02-19 02:44:52.435441 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-19 02:44:52.435463 | orchestrator | 2026-02-19 02:44:52.435476 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-19 02:44:52.435489 | orchestrator | Thursday 19 February 2026 02:44:43 +0000 (0:00:00.102) 0:00:00.102 ***** 2026-02-19 02:44:52.435500 | orchestrator | ok: [testbed-manager] 2026-02-19 02:44:52.435512 | orchestrator | 2026-02-19 02:44:52.435523 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-19 02:44:52.435535 | orchestrator | Thursday 19 February 2026 02:44:46 +0000 (0:00:03.399) 0:00:03.502 ***** 2026-02-19 02:44:52.435546 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:44:52.435559 | orchestrator | 2026-02-19 02:44:52.435570 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-19 02:44:52.435580 | orchestrator | Thursday 19 February 2026 02:44:46 +0000 (0:00:00.065) 0:00:03.567 ***** 2026-02-19 02:44:52.435659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-19 02:44:52.435674 | orchestrator | 2026-02-19 02:44:52.435686 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-19 02:44:52.435697 | orchestrator | Thursday 19 February 2026 02:44:46 +0000 (0:00:00.070) 0:00:03.638 ***** 2026-02-19 02:44:52.435726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-19 02:44:52.435738 | orchestrator | 2026-02-19 02:44:52.435749 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-19 02:44:52.435760 | orchestrator | Thursday 19 February 2026 02:44:47 +0000 (0:00:00.072) 0:00:03.711 ***** 2026-02-19 02:44:52.435771 | orchestrator | ok: [testbed-manager] 2026-02-19 02:44:52.435782 | orchestrator | 2026-02-19 02:44:52.435793 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-19 02:44:52.435804 | orchestrator | Thursday 19 February 2026 02:44:47 +0000 (0:00:00.915) 0:00:04.627 ***** 2026-02-19 02:44:52.435815 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:44:52.435826 | orchestrator | 2026-02-19 02:44:52.435839 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-19 02:44:52.435852 | orchestrator | Thursday 19 February 2026 02:44:47 +0000 (0:00:00.047) 0:00:04.674 ***** 2026-02-19 02:44:52.435887 | orchestrator | ok: [testbed-manager] 2026-02-19 02:44:52.435899 | orchestrator | 2026-02-19 02:44:52.435912 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-19 02:44:52.435925 | orchestrator | Thursday 19 February 2026 02:44:48 +0000 (0:00:00.449) 0:00:05.123 ***** 2026-02-19 02:44:52.435938 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:44:52.435950 | orchestrator | 2026-02-19 02:44:52.435962 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-19 02:44:52.435976 | orchestrator | Thursday 19 February 2026 02:44:48 +0000 (0:00:00.074) 0:00:05.198 ***** 2026-02-19 02:44:52.435988 | orchestrator | changed: [testbed-manager] 2026-02-19 02:44:52.436001 | orchestrator | 2026-02-19 02:44:52.436013 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-19 02:44:52.436025 | orchestrator | Thursday 19 February 2026 02:44:49 +0000 (0:00:00.520) 0:00:05.718 ***** 2026-02-19 02:44:52.436038 | orchestrator | changed: 
[testbed-manager] 2026-02-19 02:44:52.436050 | orchestrator | 2026-02-19 02:44:52.436062 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-19 02:44:52.436075 | orchestrator | Thursday 19 February 2026 02:44:50 +0000 (0:00:01.040) 0:00:06.759 ***** 2026-02-19 02:44:52.436087 | orchestrator | ok: [testbed-manager] 2026-02-19 02:44:52.436098 | orchestrator | 2026-02-19 02:44:52.436108 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-19 02:44:52.436119 | orchestrator | Thursday 19 February 2026 02:44:51 +0000 (0:00:00.947) 0:00:07.706 ***** 2026-02-19 02:44:52.436130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-19 02:44:52.436141 | orchestrator | 2026-02-19 02:44:52.436153 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-19 02:44:52.436172 | orchestrator | Thursday 19 February 2026 02:44:51 +0000 (0:00:00.085) 0:00:07.792 ***** 2026-02-19 02:44:52.436192 | orchestrator | changed: [testbed-manager] 2026-02-19 02:44:52.436211 | orchestrator | 2026-02-19 02:44:52.436232 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 02:44:52.436253 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-19 02:44:52.436273 | orchestrator | 2026-02-19 02:44:52.436292 | orchestrator | 2026-02-19 02:44:52.436303 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 02:44:52.436314 | orchestrator | Thursday 19 February 2026 02:44:52 +0000 (0:00:01.129) 0:00:08.922 ***** 2026-02-19 02:44:52.436325 | orchestrator | =============================================================================== 2026-02-19 02:44:52.436335 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.40s 2026-02-19 02:44:52.436346 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.13s 2026-02-19 02:44:52.436356 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2026-02-19 02:44:52.436367 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2026-02-19 02:44:52.436377 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.92s 2026-02-19 02:44:52.436388 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2026-02-19 02:44:52.436418 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.45s 2026-02-19 02:44:52.436429 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-02-19 02:44:52.436440 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-02-19 02:44:52.436450 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-02-19 02:44:52.436461 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-02-19 02:44:52.436471 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-02-19 02:44:52.436491 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-02-19 02:44:52.714277 | orchestrator | + osism apply sshconfig 2026-02-19 02:45:04.763670 | orchestrator | 2026-02-19 02:45:04 | INFO  | Task b2d0dccb-3403-4f72-8c25-9181034ab9c4 (sshconfig) was prepared for execution. 
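The resolvconf play above reports a changed "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task, i.e. pointing `/etc/resolv.conf` at systemd-resolved's stub resolver file. A side-effect-free sketch of that operation (the `link_stub_resolv` wrapper and scratch-root parameter are illustrative; the role itself uses Ansible's `file` module state handling, not this script):

```shell
# Sketch of the stub-resolv.conf link step, run against a scratch
# root so it can be exercised without touching the real host.
link_stub_resolv() {
    local root="$1"   # e.g. "" for the real filesystem (needs root)
    mkdir -p "$root/run/systemd/resolve" "$root/etc"
    touch "$root/run/systemd/resolve/stub-resolv.conf"
    # -sfn: replace any existing file/symlink at /etc/resolv.conf
    ln -sfn "$root/run/systemd/resolve/stub-resolv.conf" "$root/etc/resolv.conf"
}
```

The stub file routes lookups through systemd-resolved's local listener, which is why the play restarts `systemd-resolved` immediately afterwards.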
2026-02-19 02:45:04.763782 | orchestrator | 2026-02-19 02:45:04 | INFO  | It takes a moment until task b2d0dccb-3403-4f72-8c25-9181034ab9c4 (sshconfig) has been started and output is visible here. 2026-02-19 02:45:16.591156 | orchestrator | 2026-02-19 02:45:16.591316 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-19 02:45:16.591349 | orchestrator | 2026-02-19 02:45:16.591414 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-19 02:45:16.591475 | orchestrator | Thursday 19 February 2026 02:45:09 +0000 (0:00:00.163) 0:00:00.163 ***** 2026-02-19 02:45:16.591490 | orchestrator | ok: [testbed-manager] 2026-02-19 02:45:16.591502 | orchestrator | 2026-02-19 02:45:16.591514 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-19 02:45:16.591526 | orchestrator | Thursday 19 February 2026 02:45:09 +0000 (0:00:00.537) 0:00:00.701 ***** 2026-02-19 02:45:16.591537 | orchestrator | changed: [testbed-manager] 2026-02-19 02:45:16.591549 | orchestrator | 2026-02-19 02:45:16.591560 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-19 02:45:16.591571 | orchestrator | Thursday 19 February 2026 02:45:10 +0000 (0:00:00.522) 0:00:01.223 ***** 2026-02-19 02:45:16.591582 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-19 02:45:16.591593 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-19 02:45:16.591604 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-19 02:45:16.591616 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-19 02:45:16.591626 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-19 02:45:16.591637 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-19 02:45:16.591648 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-19 02:45:16.591687 | orchestrator | 2026-02-19 02:45:16.591699 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-19 02:45:16.591709 | orchestrator | Thursday 19 February 2026 02:45:15 +0000 (0:00:05.632) 0:00:06.855 ***** 2026-02-19 02:45:16.591720 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:45:16.591731 | orchestrator | 2026-02-19 02:45:16.591742 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-19 02:45:16.591752 | orchestrator | Thursday 19 February 2026 02:45:15 +0000 (0:00:00.070) 0:00:06.926 ***** 2026-02-19 02:45:16.591763 | orchestrator | changed: [testbed-manager] 2026-02-19 02:45:16.591774 | orchestrator | 2026-02-19 02:45:16.591785 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 02:45:16.591797 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 02:45:16.591808 | orchestrator | 2026-02-19 02:45:16.591819 | orchestrator | 2026-02-19 02:45:16.591830 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 02:45:16.591841 | orchestrator | Thursday 19 February 2026 02:45:16 +0000 (0:00:00.558) 0:00:07.484 ***** 2026-02-19 02:45:16.591852 | orchestrator | =============================================================================== 2026-02-19 02:45:16.591863 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.63s 2026-02-19 02:45:16.591873 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2026-02-19 02:45:16.591884 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s 2026-02-19 02:45:16.591895 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.52s 2026-02-19 02:45:16.591906 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-02-19 02:45:16.873728 | orchestrator | + osism apply known-hosts 2026-02-19 02:45:28.933055 | orchestrator | 2026-02-19 02:45:28 | INFO  | Task 05ca1c65-57d4-422a-92ae-bcb84dc2dab4 (known-hosts) was prepared for execution. 2026-02-19 02:45:28.933179 | orchestrator | 2026-02-19 02:45:28 | INFO  | It takes a moment until task 05ca1c65-57d4-422a-92ae-bcb84dc2dab4 (known-hosts) has been started and output is visible here. 2026-02-19 02:45:45.666100 | orchestrator | 2026-02-19 02:45:45.666215 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-19 02:45:45.666233 | orchestrator | 2026-02-19 02:45:45.666246 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-19 02:45:45.666258 | orchestrator | Thursday 19 February 2026 02:45:32 +0000 (0:00:00.170) 0:00:00.170 ***** 2026-02-19 02:45:45.666270 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-19 02:45:45.666282 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-19 02:45:45.666293 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-19 02:45:45.666304 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-19 02:45:45.666315 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-19 02:45:45.666326 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-19 02:45:45.666337 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-19 02:45:45.666348 | orchestrator | 2026-02-19 02:45:45.666359 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-19 02:45:45.666371 | orchestrator | Thursday 19 February 2026 02:45:38 +0000 (0:00:05.983) 0:00:06.154 ***** 2026-02-19 
02:45:45.666383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-19 02:45:45.666396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-19 02:45:45.666407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-19 02:45:45.666418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-19 02:45:45.666429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-19 02:45:45.666451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-19 02:45:45.666463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-19 02:45:45.666474 | orchestrator | 2026-02-19 02:45:45.666485 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:45:45.666496 | orchestrator | Thursday 19 February 2026 02:45:39 +0000 (0:00:00.191) 0:00:06.346 ***** 2026-02-19 02:45:45.666508 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFRFWhv3Y0iCItAMyOzsDZurMqy/3NqFWmmedhpBr0C8ZIhisX7oiPUOf3YHEXqD3oX7wfN0yL2cbkwp+OMeHzE=) 2026-02-19 02:45:45.666529 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXocBNDYrP4RyE5zqUaFuEPXA/fTAHh3c6aIu/LaoB6hMYJKe/2hewjYLeRSsjCKL4NWXRxKAhVVV6z+J3ILPJAruH6DuU2fFEpCRJ7D+88y+5UU/P26s8JJhuREx3p/prbl3aLDI0PzidOeCI1dfATHTjO0UScr3ai2UVsIyP1o8HHnm0u5s9L+TUxlwc4KbKggUNeWPo8xV/depE5+143FcHkdDHJgFIJU500uP853QUzwSvitVb4aUp/u4Wi+UjZAuaZq2iuRqt6UxhieH5hpwps5Stg91widKcCrK+pihBMDQo4BBmmKe95A7CQpm5XDXA+KDeMZpnAfJDtW63IToJTzLzWjFMaDQLFOYJyOTbCoElaDQaosA3fHSPMX8GlOouGPfUgvC1a3eWmlCyC84qBKsHw8LI0/RmT0rbJsMm6gKEvb3t0gJ0ktW1XV293Hie+0NQ9P/maFuOTdpZOCNr4d11zywOC7kUFN318vB3FfoQXdWUq+yYE4KtcMc=) 2026-02-19 02:45:45.666565 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINcavBQzbvzrj9HTWiOvUf8q5AVbu8P/mYF8vxmMZVBm) 2026-02-19 02:45:45.666583 | orchestrator | 2026-02-19 02:45:45.666602 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:45:45.666620 | orchestrator | Thursday 19 February 2026 02:45:40 +0000 (0:00:01.180) 0:00:07.526 ***** 2026-02-19 02:45:45.666663 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMK1KSDX1Ib5kNoOTSdlpXhM/l/k7x6fYJCFbwTIBdMoHV5j9NmZRgHZ2dDg/6vYuhqkadWKANlGGFTk5lB+ppOHsp4GEWeoN8T4ujYHMbVRqOuJYxGxwo4RJFXwOWeMyBI76S6lEYH4vW8Cwu0x4yIA7R9nLm0nDIaKgxHwH0nisKnTxeItQwQqTBN4H2fmwjfNaskHrhaPzr8R4XpMUd2TpdbANsUwDfZ1SRisCKNbjPA86IvnX12hS3zutC7uRtTMl/zwTJey2+1yE1uOr3nF8hIW9V0b5YjHe/WSzT7HdLlyyQjIcsCOWo6q2PKM2imSCOtnCocWASwp4f94lxSOLkb2YBN/8geXGEqd7QtAQply42ecnUOCOw6lijA0KpXTeWcTad5QOa3LxLMJJmo57jOHVIJxWqRKXXoJN1UM62ljhC3kF2b3TqvTsS/L0VrwkrWFTWxAT75k6lCpCJOPuBbK9fWn2b0a5WGpgezww78w5jWBF4fTwLhWMuloc=) 2026-02-19 02:45:45.666685 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFI5kMK55Muk36qDb5wfAu0HUJ6tfr9L920OzThlQgE3r1AA8udfOnY5SzAdRzktV6GcYPSOv03OglnPUWb5BNU=) 2026-02-19 02:45:45.666706 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKCLbP1/DnziPSXrnGyBg1Ce+kcgH8qTC792AkouipZh) 2026-02-19 02:45:45.666725 | orchestrator | 2026-02-19 02:45:45.666772 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:45:45.666787 | orchestrator | Thursday 19 February 2026 02:45:41 +0000 (0:00:01.021) 0:00:08.548 ***** 2026-02-19 02:45:45.666800 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdrZmiDz8QjAx1pjIvGdN5lnGclpwcUtysagQm57PdeBvoyroJkZ4dfp5xz865y0sacvN8DykpcNPp9eqKfKL5dD6dluSadj/lodqlkcLHN8WnsbTGm0TnBJwct4byQa4ZRHiHPKqCqWzNxQDUlvxTw2AJmogvzMRE0xtMdyqtrFv5LoE8JFueoPbR27KAyNwD5pJzoBTLmsWqnpiztu7l+IDkZvh0Rx1qOOWtweYiZB4w6gr+mTO7QGq/6ehE3F8e6I99zEHMF0IYBTPMspycBFoEu1czw4sjVabe6bMQnkpiI2ETg5WF/9qOgOfpHtVlAiHb+EEM+WCOdslgnP8Hlc/1fFPGgyFsCFs7+fNq4k94uG4fvx1ym5f9u6w9yBoiINItr/KzwjV78MMOxr9SiTLBY8qwejGf6DfLWyK+uXHokZXGglN8jQGA8zg/fa2+jTazsijOwmjI+m88kqdo1kk5oPVJIgVKEi3xfA/26e/ZAMGJBqep4AOkYntqu50=) 2026-02-19 02:45:45.666813 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE/Ex1P4mYvmFWPZ67BNSYZsC7aipfbNREjEzR4Ad1Jto7vJy++Y7q5Y4l6Km1W20fU4XJ3Y24uSbg5op48qot8=) 2026-02-19 02:45:45.666826 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF3606EmXk5cUBzHTxb5x84MP1Fu+EBL4A4EonV4O9no) 2026-02-19 02:45:45.666838 | orchestrator | 2026-02-19 02:45:45.666851 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:45:45.666863 | orchestrator | Thursday 19 February 2026 02:45:42 +0000 (0:00:01.036) 0:00:09.585 ***** 
2026-02-19 02:45:45.666876 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB+BsUfStWhDQ/YuoLVFfkJ1D9CA3nV38LRcfFjpCzNE) 2026-02-19 02:45:45.666889 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2y7kxZ+0hOSPGw3jX6KpHqfAZRZb9OIW0OoDMoegq0atm1gbJlLVfQmQzfOBFSLd1XPQVZT/2dpzT0AqTuo7ZE3XVtUV4DxyLnOiseG+N2uFz9sit77MYjwyDWDpM+R6mFNsnj3c7IuWn4oYLaTZ98yv+Rt+4m0G3ZcWAQnYorj4B5TfJz5THcbba/wVY4/hB5p+TL+ceDIjMkvLqJ2P/6Us3O5uFLZPmEfgxcezHijOqfaWeZmwQEKo7oUva/+3miffmfubht/yd96JQaAWb2g4gCm4tw/+w/+w6dp8NAnsWKnJGnyTJtz9ZL8jwaVZdrjZ0sblMb/nvCtyzw2hCAe/k2R2SSqGdBNp01uJBEY44+rTly+ejSBpK4975x0sZlo3FlnCwkaCfkNrtUkjTtX5PP9AscshPb8OJanTVIBwO3WYhXI1DBJVSNeBjz3ZtzdkfZYUtSqZjpjHXERiaKc5/TG303jARHmvYzRnx3MBa872NSnhWkEJIb9FOk6E=) 2026-02-19 02:45:45.666912 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOOpDcQxuPElmfuWjR7VgwU70MEMsy83wu10WQGEPSATs17aI3T7sbI3wEnM8kOrnbSCyI0pDgji/mPSI9m/khc=) 2026-02-19 02:45:45.666924 | orchestrator | 2026-02-19 02:45:45.666938 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:45:45.666951 | orchestrator | Thursday 19 February 2026 02:45:43 +0000 (0:00:01.141) 0:00:10.726 ***** 2026-02-19 02:45:45.667040 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAv7RGg48XE9RwE4Gf2JUvYZ5g5JqhyQ3a7nOL/WRkIrZFXVg/hwuyQbCeBlyjC8sl9LH7PW7NEdKwQ3IxTqqPo=) 2026-02-19 02:45:45.667053 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINnkX9FWbv1oMKnxtniYweFBh4QaG0KVM+ckSLt642fa) 2026-02-19 02:45:45.667064 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCuaca2+aOOq2Jy04jBe5f705GGKZRcjr1W3d/IcC62kLJFdMSn9zYF8zeXLuOilDb4U9Ii+PJ4JeN3yvxnr76lKIj1v3W03P7nWVeUBYWqx2iwB22xy9at86cjpmz5+dUMPyaXsnEWt1Tg6ay1EKoABnYBEbgTrcGd1DtjzN6WHk353VPqVwAPrfz7MgHfG+6CdGleBiu0QjRuogFbva96Z7dxIvrIOA2Dr8ma4OVwElhMhytvcWIGM5mfh38saXTIACymBIqhnBfCAjcA9eMo8ucQkLAjXwv3nZPlODIN8iSGvHjg23uXKFwA2ZD8fnq+MrCisIzZq64TUpB0g9hd0cFoRrsa5S80Xm2mOyyKJ5ActpgUdmExaepr+HvzmNUdX2wCltOSJZIRL0FcsNGoRjn/hzDIPuvBXfeilHvVhFrPqcI+FZRDuRUKgJvd/N76vI/TpMwYujbJV69rqKBHCJzFZwYIin5q2PC5xdaLwCTl0Bhb86oyHLCenZSeg2M=) 2026-02-19 02:45:45.667076 | orchestrator | 2026-02-19 02:45:45.667087 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:45:45.667098 | orchestrator | Thursday 19 February 2026 02:45:44 +0000 (0:00:01.122) 0:00:11.848 ***** 2026-02-19 02:45:45.667120 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCl8Lsvm/aoJyboOqoy0+viIdyidd5ZSamoxxBO9GrbWWPq9+HO/urawhUze5kv65zElpplwUa314IwmBeVrd1v2DINj8puRiOP3+37kM9jmZstGQn51ZfwkX4yQCvxjgTGHA8hTo37NLbDPOULnLIJcHTcsDaQ35mYuZ6BmnzIVI1HpPXLl1mF5FAllH145La0pn4mKVmUMxEheF2qfRxZyjq3LBgEiHjSUb0PS9AMAmjspCVj5DzjpLLrRslbUVn1ZzRkE+pKWAITrk25hzesvNi+F2o/l4fkI8LRgTJ344KSJBT3BniEUvrrrET2/wqUEwYmGZ5U1oZGLI8INUuGzBZt6NUJ4CjJZInmUsYTWGITlstdB8bGgc1NwqxEbeHxo7t63nUKMEMBLUCfvPdAqSKTWsT3ukYj3Kk0JUbJ+uv+RPmgACe8rPuhSgUJ/G+IwXL0oAwx68KwJR6AUpD9SknFH6id0n42tJ3rbWp/VC9HeMvouioDxxvVYY5BNFM=) 2026-02-19 02:45:57.157102 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAWvms46ImPdNjQ7nghA8SaKqTp9Isr1JjXF4n44lSearh13DAw0uM8LToScPuRRSbN8J0WTD6VwAED4YnzPXMI=) 2026-02-19 02:45:57.157217 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHfzcAf/1hpctTh8bjCSYc9d1d6QT1titwMEVJCnECnj) 2026-02-19 02:45:57.157235 | orchestrator | 2026-02-19 02:45:57.157250 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:45:57.157266 | orchestrator | Thursday 19 February 2026 02:45:45 +0000 (0:00:01.054) 0:00:12.902 ***** 2026-02-19 02:45:57.157285 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEVDa+DtF+XydJGczf/uJ0d1b2o0v3VArZ+gjHLDzdCP) 2026-02-19 02:45:57.157305 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDg7SBxHqI/7c6N19xp0rAqeY4Rjg9ODRFZJEOOgNZx/hxjH50WsEP9YiwYjbvwHIs1QehzTSqD0Fp8/fU2cFNh3xBrzuPFGOXmYJ6leCajYHfYiE+qlN+LZcvjjoVO8RPknOR/lMLtQkJZ5FcyAmR1MtSmHGI99uUJMhQhWgOfuVr1MS6TmQ5sdsgk9AkkcP1x8H3pcu+IfYC98H6Bi4FiT0NLKTUDNvpBfM4IScIcn5qMBHl3dtacASAnjchs9NttDeWOBWAKvi2aMi8d4wiQSDGLJY9nLJfMapUm9oxbdILwcXrbqSS6rtKRIYkVPQy6wv7NsNX+Tys0mIjKGhkkYMs7Xf9m45HzShjNZ6FoWzHgse5oLUAAQdxnUJuWkKXGYzZl5Y1Z4EP5zeXie6TDWwhuXMND024maUwrx4lIxDdXIPehlhZEYv6sgwt8lSv+fqF+q32P1XcjZ4Bu7mfPHMTABHs8ySzWWq8keDq7hwHPoPJ0zm8jkKsleIveY/s=) 2026-02-19 02:45:57.157356 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIPhbpnczCY1QBmLijDR1ZVUytM9RBRDLKdzTpATrulrfO55alSJ57VZDvtmp8HdYtjSeqwtpJNuQfkMyNeexCA=) 2026-02-19 02:45:57.157373 | orchestrator | 2026-02-19 02:45:57.157385 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-19 02:45:57.157397 | orchestrator | Thursday 19 February 2026 02:45:46 +0000 (0:00:01.110) 0:00:14.013 ***** 2026-02-19 02:45:57.157408 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-19 02:45:57.157420 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-19 02:45:57.157430 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-19 02:45:57.157441 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-19 02:45:57.157452 | orchestrator | 
ok: [testbed-manager] => (item=testbed-node-0) 2026-02-19 02:45:57.157462 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-19 02:45:57.157473 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-19 02:45:57.157484 | orchestrator | 2026-02-19 02:45:57.157495 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-19 02:45:57.157507 | orchestrator | Thursday 19 February 2026 02:45:52 +0000 (0:00:05.501) 0:00:19.514 ***** 2026-02-19 02:45:57.157518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-19 02:45:57.157531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-19 02:45:57.157542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-19 02:45:57.157553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-19 02:45:57.157564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-19 02:45:57.157575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-19 02:45:57.157586 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-19 02:45:57.157596 | orchestrator | 2026-02-19 02:45:57.157607 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:45:57.157618 | orchestrator | Thursday 19 February 2026 02:45:52 +0000 (0:00:00.184) 0:00:19.698 ***** 2026-02-19 02:45:57.157629 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFRFWhv3Y0iCItAMyOzsDZurMqy/3NqFWmmedhpBr0C8ZIhisX7oiPUOf3YHEXqD3oX7wfN0yL2cbkwp+OMeHzE=) 2026-02-19 02:45:57.157701 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXocBNDYrP4RyE5zqUaFuEPXA/fTAHh3c6aIu/LaoB6hMYJKe/2hewjYLeRSsjCKL4NWXRxKAhVVV6z+J3ILPJAruH6DuU2fFEpCRJ7D+88y+5UU/P26s8JJhuREx3p/prbl3aLDI0PzidOeCI1dfATHTjO0UScr3ai2UVsIyP1o8HHnm0u5s9L+TUxlwc4KbKggUNeWPo8xV/depE5+143FcHkdDHJgFIJU500uP853QUzwSvitVb4aUp/u4Wi+UjZAuaZq2iuRqt6UxhieH5hpwps5Stg91widKcCrK+pihBMDQo4BBmmKe95A7CQpm5XDXA+KDeMZpnAfJDtW63IToJTzLzWjFMaDQLFOYJyOTbCoElaDQaosA3fHSPMX8GlOouGPfUgvC1a3eWmlCyC84qBKsHw8LI0/RmT0rbJsMm6gKEvb3t0gJ0ktW1XV293Hie+0NQ9P/maFuOTdpZOCNr4d11zywOC7kUFN318vB3FfoQXdWUq+yYE4KtcMc=) 2026-02-19 02:45:57.157737 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINcavBQzbvzrj9HTWiOvUf8q5AVbu8P/mYF8vxmMZVBm) 2026-02-19 02:45:57.157757 | orchestrator | 2026-02-19 02:45:57.157852 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:45:57.157874 | orchestrator | Thursday 19 February 2026 02:45:53 +0000 (0:00:01.194) 0:00:20.892 ***** 2026-02-19 02:45:57.157895 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMK1KSDX1Ib5kNoOTSdlpXhM/l/k7x6fYJCFbwTIBdMoHV5j9NmZRgHZ2dDg/6vYuhqkadWKANlGGFTk5lB+ppOHsp4GEWeoN8T4ujYHMbVRqOuJYxGxwo4RJFXwOWeMyBI76S6lEYH4vW8Cwu0x4yIA7R9nLm0nDIaKgxHwH0nisKnTxeItQwQqTBN4H2fmwjfNaskHrhaPzr8R4XpMUd2TpdbANsUwDfZ1SRisCKNbjPA86IvnX12hS3zutC7uRtTMl/zwTJey2+1yE1uOr3nF8hIW9V0b5YjHe/WSzT7HdLlyyQjIcsCOWo6q2PKM2imSCOtnCocWASwp4f94lxSOLkb2YBN/8geXGEqd7QtAQply42ecnUOCOw6lijA0KpXTeWcTad5QOa3LxLMJJmo57jOHVIJxWqRKXXoJN1UM62ljhC3kF2b3TqvTsS/L0VrwkrWFTWxAT75k6lCpCJOPuBbK9fWn2b0a5WGpgezww78w5jWBF4fTwLhWMuloc=) 2026-02-19 02:45:57.157915 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFI5kMK55Muk36qDb5wfAu0HUJ6tfr9L920OzThlQgE3r1AA8udfOnY5SzAdRzktV6GcYPSOv03OglnPUWb5BNU=) 2026-02-19 02:45:57.157936 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKCLbP1/DnziPSXrnGyBg1Ce+kcgH8qTC792AkouipZh) 2026-02-19 02:45:57.157970 | orchestrator | 2026-02-19 02:45:57.157989 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:45:57.158006 | orchestrator | Thursday 19 February 2026 02:45:54 +0000 (0:00:01.211) 0:00:22.104 ***** 2026-02-19 02:45:57.158076 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE/Ex1P4mYvmFWPZ67BNSYZsC7aipfbNREjEzR4Ad1Jto7vJy++Y7q5Y4l6Km1W20fU4XJ3Y24uSbg5op48qot8=) 2026-02-19 02:45:57.158088 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCdrZmiDz8QjAx1pjIvGdN5lnGclpwcUtysagQm57PdeBvoyroJkZ4dfp5xz865y0sacvN8DykpcNPp9eqKfKL5dD6dluSadj/lodqlkcLHN8WnsbTGm0TnBJwct4byQa4ZRHiHPKqCqWzNxQDUlvxTw2AJmogvzMRE0xtMdyqtrFv5LoE8JFueoPbR27KAyNwD5pJzoBTLmsWqnpiztu7l+IDkZvh0Rx1qOOWtweYiZB4w6gr+mTO7QGq/6ehE3F8e6I99zEHMF0IYBTPMspycBFoEu1czw4sjVabe6bMQnkpiI2ETg5WF/9qOgOfpHtVlAiHb+EEM+WCOdslgnP8Hlc/1fFPGgyFsCFs7+fNq4k94uG4fvx1ym5f9u6w9yBoiINItr/KzwjV78MMOxr9SiTLBY8qwejGf6DfLWyK+uXHokZXGglN8jQGA8zg/fa2+jTazsijOwmjI+m88kqdo1kk5oPVJIgVKEi3xfA/26e/ZAMGJBqep4AOkYntqu50=) 2026-02-19 02:45:57.158100 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF3606EmXk5cUBzHTxb5x84MP1Fu+EBL4A4EonV4O9no) 2026-02-19 02:45:57.158111 | orchestrator | 2026-02-19 02:45:57.158121 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:45:57.158133 | orchestrator | Thursday 19 February 2026 02:45:56 +0000 (0:00:01.174) 0:00:23.278 ***** 2026-02-19 02:45:57.158144 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2y7kxZ+0hOSPGw3jX6KpHqfAZRZb9OIW0OoDMoegq0atm1gbJlLVfQmQzfOBFSLd1XPQVZT/2dpzT0AqTuo7ZE3XVtUV4DxyLnOiseG+N2uFz9sit77MYjwyDWDpM+R6mFNsnj3c7IuWn4oYLaTZ98yv+Rt+4m0G3ZcWAQnYorj4B5TfJz5THcbba/wVY4/hB5p+TL+ceDIjMkvLqJ2P/6Us3O5uFLZPmEfgxcezHijOqfaWeZmwQEKo7oUva/+3miffmfubht/yd96JQaAWb2g4gCm4tw/+w/+w6dp8NAnsWKnJGnyTJtz9ZL8jwaVZdrjZ0sblMb/nvCtyzw2hCAe/k2R2SSqGdBNp01uJBEY44+rTly+ejSBpK4975x0sZlo3FlnCwkaCfkNrtUkjTtX5PP9AscshPb8OJanTVIBwO3WYhXI1DBJVSNeBjz3ZtzdkfZYUtSqZjpjHXERiaKc5/TG303jARHmvYzRnx3MBa872NSnhWkEJIb9FOk6E=) 2026-02-19 02:45:57.158155 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOOpDcQxuPElmfuWjR7VgwU70MEMsy83wu10WQGEPSATs17aI3T7sbI3wEnM8kOrnbSCyI0pDgji/mPSI9m/khc=) 2026-02-19 02:45:57.158186 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB+BsUfStWhDQ/YuoLVFfkJ1D9CA3nV38LRcfFjpCzNE) 2026-02-19 02:46:01.914955 | orchestrator | 2026-02-19 02:46:01.915054 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:46:01.915066 | orchestrator | Thursday 19 February 2026 02:45:57 +0000 (0:00:01.115) 0:00:24.394 ***** 2026-02-19 02:46:01.915076 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuaca2+aOOq2Jy04jBe5f705GGKZRcjr1W3d/IcC62kLJFdMSn9zYF8zeXLuOilDb4U9Ii+PJ4JeN3yvxnr76lKIj1v3W03P7nWVeUBYWqx2iwB22xy9at86cjpmz5+dUMPyaXsnEWt1Tg6ay1EKoABnYBEbgTrcGd1DtjzN6WHk353VPqVwAPrfz7MgHfG+6CdGleBiu0QjRuogFbva96Z7dxIvrIOA2Dr8ma4OVwElhMhytvcWIGM5mfh38saXTIACymBIqhnBfCAjcA9eMo8ucQkLAjXwv3nZPlODIN8iSGvHjg23uXKFwA2ZD8fnq+MrCisIzZq64TUpB0g9hd0cFoRrsa5S80Xm2mOyyKJ5ActpgUdmExaepr+HvzmNUdX2wCltOSJZIRL0FcsNGoRjn/hzDIPuvBXfeilHvVhFrPqcI+FZRDuRUKgJvd/N76vI/TpMwYujbJV69rqKBHCJzFZwYIin5q2PC5xdaLwCTl0Bhb86oyHLCenZSeg2M=) 2026-02-19 02:46:01.915086 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAv7RGg48XE9RwE4Gf2JUvYZ5g5JqhyQ3a7nOL/WRkIrZFXVg/hwuyQbCeBlyjC8sl9LH7PW7NEdKwQ3IxTqqPo=) 2026-02-19 02:46:01.915095 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINnkX9FWbv1oMKnxtniYweFBh4QaG0KVM+ckSLt642fa) 2026-02-19 02:46:01.915103 | orchestrator | 2026-02-19 02:46:01.915111 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:46:01.915118 | orchestrator | Thursday 19 February 2026 02:45:58 +0000 (0:00:01.129) 0:00:25.523 ***** 2026-02-19 02:46:01.915159 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHfzcAf/1hpctTh8bjCSYc9d1d6QT1titwMEVJCnECnj) 2026-02-19 02:46:01.915168 | orchestrator | changed: 
[testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCl8Lsvm/aoJyboOqoy0+viIdyidd5ZSamoxxBO9GrbWWPq9+HO/urawhUze5kv65zElpplwUa314IwmBeVrd1v2DINj8puRiOP3+37kM9jmZstGQn51ZfwkX4yQCvxjgTGHA8hTo37NLbDPOULnLIJcHTcsDaQ35mYuZ6BmnzIVI1HpPXLl1mF5FAllH145La0pn4mKVmUMxEheF2qfRxZyjq3LBgEiHjSUb0PS9AMAmjspCVj5DzjpLLrRslbUVn1ZzRkE+pKWAITrk25hzesvNi+F2o/l4fkI8LRgTJ344KSJBT3BniEUvrrrET2/wqUEwYmGZ5U1oZGLI8INUuGzBZt6NUJ4CjJZInmUsYTWGITlstdB8bGgc1NwqxEbeHxo7t63nUKMEMBLUCfvPdAqSKTWsT3ukYj3Kk0JUbJ+uv+RPmgACe8rPuhSgUJ/G+IwXL0oAwx68KwJR6AUpD9SknFH6id0n42tJ3rbWp/VC9HeMvouioDxxvVYY5BNFM=) 2026-02-19 02:46:01.915175 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAWvms46ImPdNjQ7nghA8SaKqTp9Isr1JjXF4n44lSearh13DAw0uM8LToScPuRRSbN8J0WTD6VwAED4YnzPXMI=) 2026-02-19 02:46:01.915183 | orchestrator | 2026-02-19 02:46:01.915190 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-19 02:46:01.915197 | orchestrator | Thursday 19 February 2026 02:45:59 +0000 (0:00:01.116) 0:00:26.640 ***** 2026-02-19 02:46:01.915220 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDg7SBxHqI/7c6N19xp0rAqeY4Rjg9ODRFZJEOOgNZx/hxjH50WsEP9YiwYjbvwHIs1QehzTSqD0Fp8/fU2cFNh3xBrzuPFGOXmYJ6leCajYHfYiE+qlN+LZcvjjoVO8RPknOR/lMLtQkJZ5FcyAmR1MtSmHGI99uUJMhQhWgOfuVr1MS6TmQ5sdsgk9AkkcP1x8H3pcu+IfYC98H6Bi4FiT0NLKTUDNvpBfM4IScIcn5qMBHl3dtacASAnjchs9NttDeWOBWAKvi2aMi8d4wiQSDGLJY9nLJfMapUm9oxbdILwcXrbqSS6rtKRIYkVPQy6wv7NsNX+Tys0mIjKGhkkYMs7Xf9m45HzShjNZ6FoWzHgse5oLUAAQdxnUJuWkKXGYzZl5Y1Z4EP5zeXie6TDWwhuXMND024maUwrx4lIxDdXIPehlhZEYv6sgwt8lSv+fqF+q32P1XcjZ4Bu7mfPHMTABHs8ySzWWq8keDq7hwHPoPJ0zm8jkKsleIveY/s=) 2026-02-19 02:46:01.915228 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIPhbpnczCY1QBmLijDR1ZVUytM9RBRDLKdzTpATrulrfO55alSJ57VZDvtmp8HdYtjSeqwtpJNuQfkMyNeexCA=) 2026-02-19 02:46:01.915235 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEVDa+DtF+XydJGczf/uJ0d1b2o0v3VArZ+gjHLDzdCP) 2026-02-19 02:46:01.915242 | orchestrator | 2026-02-19 02:46:01.915248 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-19 02:46:01.915272 | orchestrator | Thursday 19 February 2026 02:46:00 +0000 (0:00:01.136) 0:00:27.776 ***** 2026-02-19 02:46:01.915281 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-19 02:46:01.915288 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-19 02:46:01.915294 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-19 02:46:01.915301 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-19 02:46:01.915307 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-19 02:46:01.915314 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-19 02:46:01.915320 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-19 02:46:01.915327 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:46:01.915334 | orchestrator | 2026-02-19 02:46:01.915355 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-19 02:46:01.915362 | orchestrator | Thursday 19 February 2026 02:46:00 +0000 (0:00:00.195) 0:00:27.971 ***** 2026-02-19 02:46:01.915369 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:46:01.915376 | orchestrator | 2026-02-19 02:46:01.915383 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-19 02:46:01.915389 | orchestrator | Thursday 19 February 2026 02:46:00 +0000 (0:00:00.063) 
0:00:28.035 ***** 2026-02-19 02:46:01.915400 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:46:01.915407 | orchestrator | 2026-02-19 02:46:01.915413 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-19 02:46:01.915420 | orchestrator | Thursday 19 February 2026 02:46:00 +0000 (0:00:00.046) 0:00:28.081 ***** 2026-02-19 02:46:01.915427 | orchestrator | changed: [testbed-manager] 2026-02-19 02:46:01.915434 | orchestrator | 2026-02-19 02:46:01.915441 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 02:46:01.915447 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-19 02:46:01.915455 | orchestrator | 2026-02-19 02:46:01.915462 | orchestrator | 2026-02-19 02:46:01.915469 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 02:46:01.915475 | orchestrator | Thursday 19 February 2026 02:46:01 +0000 (0:00:00.764) 0:00:28.846 ***** 2026-02-19 02:46:01.915482 | orchestrator | =============================================================================== 2026-02-19 02:46:01.915489 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.99s 2026-02-19 02:46:01.915495 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.50s 2026-02-19 02:46:01.915503 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-02-19 02:46:01.915510 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-02-19 02:46:01.915516 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-02-19 02:46:01.915523 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-02-19 02:46:01.915530 | orchestrator 
| osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-19 02:46:01.915536 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-19 02:46:01.915543 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-19 02:46:01.915549 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-19 02:46:01.915556 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-19 02:46:01.915563 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-19 02:46:01.915569 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-19 02:46:01.915576 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-02-19 02:46:01.915587 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-02-19 02:46:01.915594 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-02-19 02:46:01.915601 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.76s 2026-02-19 02:46:01.915607 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s 2026-02-19 02:46:01.915614 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2026-02-19 02:46:01.915621 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-02-19 02:46:02.229070 | orchestrator | + osism apply squid 2026-02-19 02:46:14.185212 | orchestrator | 2026-02-19 02:46:14 | INFO  | Task 216cd82e-b424-43dd-b59b-29042e21ea8a (squid) was prepared for execution. 
2026-02-19 02:46:14.185295 | orchestrator | 2026-02-19 02:46:14 | INFO  | It takes a moment until task 216cd82e-b424-43dd-b59b-29042e21ea8a (squid) has been started and output is visible here. 2026-02-19 02:48:07.611615 | orchestrator | 2026-02-19 02:48:07.611702 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-19 02:48:07.611712 | orchestrator | 2026-02-19 02:48:07.611720 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-19 02:48:07.611727 | orchestrator | Thursday 19 February 2026 02:46:18 +0000 (0:00:00.165) 0:00:00.165 ***** 2026-02-19 02:48:07.611734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-19 02:48:07.611741 | orchestrator | 2026-02-19 02:48:07.611748 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-19 02:48:07.611755 | orchestrator | Thursday 19 February 2026 02:46:18 +0000 (0:00:00.076) 0:00:00.242 ***** 2026-02-19 02:48:07.611761 | orchestrator | ok: [testbed-manager] 2026-02-19 02:48:07.611768 | orchestrator | 2026-02-19 02:48:07.611775 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-19 02:48:07.611781 | orchestrator | Thursday 19 February 2026 02:46:19 +0000 (0:00:01.457) 0:00:01.699 ***** 2026-02-19 02:48:07.611789 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-19 02:48:07.611795 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-19 02:48:07.611801 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-19 02:48:07.611808 | orchestrator | 2026-02-19 02:48:07.611814 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-19 02:48:07.611820 | orchestrator | Thursday 
19 February 2026 02:46:21 +0000 (0:00:01.150) 0:00:02.849 ***** 2026-02-19 02:48:07.611827 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-19 02:48:07.611833 | orchestrator | 2026-02-19 02:48:07.611839 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-19 02:48:07.611846 | orchestrator | Thursday 19 February 2026 02:46:22 +0000 (0:00:01.093) 0:00:03.943 ***** 2026-02-19 02:48:07.611852 | orchestrator | ok: [testbed-manager] 2026-02-19 02:48:07.611858 | orchestrator | 2026-02-19 02:48:07.611865 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-19 02:48:07.611871 | orchestrator | Thursday 19 February 2026 02:46:22 +0000 (0:00:00.380) 0:00:04.324 ***** 2026-02-19 02:48:07.611878 | orchestrator | changed: [testbed-manager] 2026-02-19 02:48:07.611885 | orchestrator | 2026-02-19 02:48:07.611891 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-19 02:48:07.611897 | orchestrator | Thursday 19 February 2026 02:46:23 +0000 (0:00:00.966) 0:00:05.290 ***** 2026-02-19 02:48:07.611904 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-19 02:48:07.611914 | orchestrator | ok: [testbed-manager] 2026-02-19 02:48:07.611921 | orchestrator | 2026-02-19 02:48:07.611927 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-19 02:48:07.611952 | orchestrator | Thursday 19 February 2026 02:46:54 +0000 (0:00:31.098) 0:00:36.388 ***** 2026-02-19 02:48:07.611959 | orchestrator | changed: [testbed-manager] 2026-02-19 02:48:07.611965 | orchestrator | 2026-02-19 02:48:07.611971 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-19 02:48:07.611977 | orchestrator | Thursday 19 February 2026 02:47:06 +0000 (0:00:11.962) 0:00:48.351 ***** 2026-02-19 02:48:07.611984 | orchestrator | Pausing for 60 seconds 2026-02-19 02:48:07.611991 | orchestrator | changed: [testbed-manager] 2026-02-19 02:48:07.611997 | orchestrator | 2026-02-19 02:48:07.612003 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-19 02:48:07.612010 | orchestrator | Thursday 19 February 2026 02:48:06 +0000 (0:01:00.081) 0:01:48.432 ***** 2026-02-19 02:48:07.612016 | orchestrator | ok: [testbed-manager] 2026-02-19 02:48:07.612022 | orchestrator | 2026-02-19 02:48:07.612032 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-19 02:48:07.612042 | orchestrator | Thursday 19 February 2026 02:48:06 +0000 (0:00:00.070) 0:01:48.502 ***** 2026-02-19 02:48:07.612056 | orchestrator | changed: [testbed-manager] 2026-02-19 02:48:07.612113 | orchestrator | 2026-02-19 02:48:07.612124 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 02:48:07.612134 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 02:48:07.612143 | orchestrator | 2026-02-19 02:48:07.612153 | orchestrator | 2026-02-19 02:48:07.612163 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-19 02:48:07.612172 | orchestrator | Thursday 19 February 2026 02:48:07 +0000 (0:00:00.599) 0:01:49.102 ***** 2026-02-19 02:48:07.612183 | orchestrator | =============================================================================== 2026-02-19 02:48:07.612210 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-02-19 02:48:07.612222 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.10s 2026-02-19 02:48:07.612233 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.96s 2026-02-19 02:48:07.612245 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.46s 2026-02-19 02:48:07.612255 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.15s 2026-02-19 02:48:07.612265 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s 2026-02-19 02:48:07.612275 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.97s 2026-02-19 02:48:07.612283 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2026-02-19 02:48:07.612290 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-02-19 02:48:07.612298 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-02-19 02:48:07.612305 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-02-19 02:48:07.869257 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-19 02:48:07.869324 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-19 02:48:07.918838 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-19 02:48:07.918920 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-02-19 02:48:07.925241 | orchestrator | + set -e 2026-02-19 02:48:07.925299 | orchestrator | + NAMESPACE=kolla/release 2026-02-19 02:48:07.925311 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-19 02:48:07.932046 | orchestrator | ++ semver 9.5.0 9.0.0 2026-02-19 02:48:08.004249 | orchestrator | + [[ 1 -lt 0 ]] 2026-02-19 02:48:08.004472 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-19 02:48:20.039169 | orchestrator | 2026-02-19 02:48:20 | INFO  | Task 1a8ea2bd-406d-419b-9313-4517f9acae36 (operator) was prepared for execution. 2026-02-19 02:48:20.039255 | orchestrator | 2026-02-19 02:48:20 | INFO  | It takes a moment until task 1a8ea2bd-406d-419b-9313-4517f9acae36 (operator) has been started and output is visible here. 2026-02-19 02:48:35.772469 | orchestrator | 2026-02-19 02:48:35.772613 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-19 02:48:35.772642 | orchestrator | 2026-02-19 02:48:35.772663 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-19 02:48:35.772681 | orchestrator | Thursday 19 February 2026 02:48:23 +0000 (0:00:00.105) 0:00:00.105 ***** 2026-02-19 02:48:35.772700 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:48:35.772720 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:48:35.772741 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:48:35.772761 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:48:35.772781 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:48:35.772801 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:48:35.772822 | orchestrator | 2026-02-19 02:48:35.772843 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-19 02:48:35.772863 | orchestrator | Thursday 19 February 2026 02:48:27 +0000 (0:00:03.365) 0:00:03.471 
***** 2026-02-19 02:48:35.772884 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:48:35.772904 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:48:35.772925 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:48:35.772985 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:48:35.773009 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:48:35.773033 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:48:35.773056 | orchestrator | 2026-02-19 02:48:35.773080 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-19 02:48:35.773103 | orchestrator | 2026-02-19 02:48:35.773160 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-19 02:48:35.773181 | orchestrator | Thursday 19 February 2026 02:48:27 +0000 (0:00:00.729) 0:00:04.200 ***** 2026-02-19 02:48:35.773200 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:48:35.773219 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:48:35.773239 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:48:35.773258 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:48:35.773276 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:48:35.773295 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:48:35.773313 | orchestrator | 2026-02-19 02:48:35.773332 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-19 02:48:35.773352 | orchestrator | Thursday 19 February 2026 02:48:28 +0000 (0:00:00.163) 0:00:04.364 ***** 2026-02-19 02:48:35.773370 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:48:35.773389 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:48:35.773409 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:48:35.773428 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:48:35.773448 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:48:35.773468 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:48:35.773489 | orchestrator | 2026-02-19 02:48:35.773508 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-19 02:48:35.773526 | orchestrator | Thursday 19 February 2026 02:48:28 +0000 (0:00:00.186) 0:00:04.550 ***** 2026-02-19 02:48:35.773544 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:48:35.773565 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:48:35.773584 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:48:35.773601 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:48:35.773618 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:48:35.773637 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:48:35.773655 | orchestrator | 2026-02-19 02:48:35.773674 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-19 02:48:35.773692 | orchestrator | Thursday 19 February 2026 02:48:28 +0000 (0:00:00.689) 0:00:05.239 ***** 2026-02-19 02:48:35.773710 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:48:35.773729 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:48:35.773747 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:48:35.773766 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:48:35.773783 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:48:35.773802 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:48:35.773856 | orchestrator | 2026-02-19 02:48:35.773869 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-19 02:48:35.773880 | orchestrator | Thursday 19 February 2026 02:48:29 +0000 (0:00:00.809) 0:00:06.049 ***** 2026-02-19 02:48:35.773891 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-19 02:48:35.773902 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-19 02:48:35.773913 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-19 02:48:35.773924 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-02-19 02:48:35.773934 | 
orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-19 02:48:35.773945 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-19 02:48:35.773956 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-19 02:48:35.773967 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-19 02:48:35.773978 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-19 02:48:35.773988 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-19 02:48:35.773999 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-19 02:48:35.774009 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-19 02:48:35.774087 | orchestrator | 2026-02-19 02:48:35.774098 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-19 02:48:35.774109 | orchestrator | Thursday 19 February 2026 02:48:31 +0000 (0:00:01.284) 0:00:07.333 ***** 2026-02-19 02:48:35.774143 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:48:35.774155 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:48:35.774165 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:48:35.774176 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:48:35.774186 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:48:35.774197 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:48:35.774208 | orchestrator | 2026-02-19 02:48:35.774219 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-19 02:48:35.774231 | orchestrator | Thursday 19 February 2026 02:48:32 +0000 (0:00:01.195) 0:00:08.529 ***** 2026-02-19 02:48:35.774242 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-02-19 02:48:35.774253 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-02-19 02:48:35.774263 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-02-19 02:48:35.774274 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-02-19 02:48:35.774310 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-02-19 02:48:35.774322 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-19 02:48:35.774333 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-02-19 02:48:35.774344 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-19 02:48:35.774355 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-02-19 02:48:35.774365 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-02-19 02:48:35.774376 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-02-19 02:48:35.774387 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-02-19 02:48:35.774396 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-02-19 02:48:35.774406 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-02-19 02:48:35.774415 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-02-19 02:48:35.774425 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-02-19 02:48:35.774435 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-02-19 02:48:35.774444 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-02-19 02:48:35.774454 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-02-19 02:48:35.774464 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-02-19 02:48:35.774482 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-02-19 02:48:35.774492 | 
orchestrator | 2026-02-19 02:48:35.774502 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-19 02:48:35.774512 | orchestrator | Thursday 19 February 2026 02:48:33 +0000 (0:00:01.372) 0:00:09.901 ***** 2026-02-19 02:48:35.774522 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:48:35.774532 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:48:35.774541 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:48:35.774550 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:48:35.774560 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:48:35.774570 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:48:35.774579 | orchestrator | 2026-02-19 02:48:35.774589 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-19 02:48:35.774598 | orchestrator | Thursday 19 February 2026 02:48:33 +0000 (0:00:00.137) 0:00:10.039 ***** 2026-02-19 02:48:35.774608 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:48:35.774617 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:48:35.774626 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:48:35.774636 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:48:35.774645 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:48:35.774654 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:48:35.774664 | orchestrator | 2026-02-19 02:48:35.774674 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-19 02:48:35.774683 | orchestrator | Thursday 19 February 2026 02:48:33 +0000 (0:00:00.175) 0:00:10.214 ***** 2026-02-19 02:48:35.774693 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:48:35.774702 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:48:35.774711 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:48:35.774721 | orchestrator | changed: [testbed-node-3] 2026-02-19 
02:48:35.774730 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:48:35.774739 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:48:35.774749 | orchestrator | 2026-02-19 02:48:35.774758 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-19 02:48:35.774768 | orchestrator | Thursday 19 February 2026 02:48:34 +0000 (0:00:00.631) 0:00:10.846 ***** 2026-02-19 02:48:35.774777 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:48:35.774787 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:48:35.774796 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:48:35.774805 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:48:35.774825 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:48:35.774835 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:48:35.774845 | orchestrator | 2026-02-19 02:48:35.774855 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-19 02:48:35.774864 | orchestrator | Thursday 19 February 2026 02:48:34 +0000 (0:00:00.149) 0:00:10.996 ***** 2026-02-19 02:48:35.774874 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-19 02:48:35.774884 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:48:35.774893 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-19 02:48:35.774902 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-19 02:48:35.774912 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:48:35.774921 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:48:35.774930 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-19 02:48:35.774940 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:48:35.774949 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-19 02:48:35.774959 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-19 02:48:35.774968 | orchestrator | changed: [testbed-node-0] 2026-02-19 
02:48:35.774977 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:48:35.774987 | orchestrator | 2026-02-19 02:48:35.774996 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-19 02:48:35.775006 | orchestrator | Thursday 19 February 2026 02:48:35 +0000 (0:00:00.735) 0:00:11.731 ***** 2026-02-19 02:48:35.775021 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:48:35.775031 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:48:35.775040 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:48:35.775050 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:48:35.775059 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:48:35.775069 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:48:35.775078 | orchestrator | 2026-02-19 02:48:35.775088 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-19 02:48:35.775098 | orchestrator | Thursday 19 February 2026 02:48:35 +0000 (0:00:00.138) 0:00:11.870 ***** 2026-02-19 02:48:35.775107 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:48:35.775134 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:48:35.775144 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:48:35.775154 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:48:35.775170 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:48:37.150720 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:48:37.150796 | orchestrator | 2026-02-19 02:48:37.150803 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-19 02:48:37.150809 | orchestrator | Thursday 19 February 2026 02:48:35 +0000 (0:00:00.155) 0:00:12.025 ***** 2026-02-19 02:48:37.150814 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:48:37.150818 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:48:37.150823 | orchestrator | skipping: [testbed-node-2] 2026-02-19 
02:48:37.150827 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:48:37.150832 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:48:37.150836 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:48:37.150841 | orchestrator | 2026-02-19 02:48:37.150845 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-19 02:48:37.150850 | orchestrator | Thursday 19 February 2026 02:48:35 +0000 (0:00:00.166) 0:00:12.191 ***** 2026-02-19 02:48:37.150854 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:48:37.150858 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:48:37.150877 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:48:37.150881 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:48:37.150886 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:48:37.150890 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:48:37.150894 | orchestrator | 2026-02-19 02:48:37.150898 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-19 02:48:37.150903 | orchestrator | Thursday 19 February 2026 02:48:36 +0000 (0:00:00.672) 0:00:12.863 ***** 2026-02-19 02:48:37.150907 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:48:37.150911 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:48:37.150916 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:48:37.150920 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:48:37.150925 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:48:37.150929 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:48:37.150933 | orchestrator | 2026-02-19 02:48:37.150937 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 02:48:37.150942 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 02:48:37.150949 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 02:48:37.150953 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 02:48:37.150958 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 02:48:37.150962 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 02:48:37.150982 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 02:48:37.150987 | orchestrator | 2026-02-19 02:48:37.150991 | orchestrator | 2026-02-19 02:48:37.150996 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 02:48:37.151000 | orchestrator | Thursday 19 February 2026 02:48:36 +0000 (0:00:00.251) 0:00:13.115 ***** 2026-02-19 02:48:37.151004 | orchestrator | =============================================================================== 2026-02-19 02:48:37.151008 | orchestrator | Gathering Facts --------------------------------------------------------- 3.37s 2026-02-19 02:48:37.151013 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.37s 2026-02-19 02:48:37.151018 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.28s 2026-02-19 02:48:37.151023 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.20s 2026-02-19 02:48:37.151027 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s 2026-02-19 02:48:37.151031 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s 2026-02-19 02:48:37.151036 | orchestrator | Do not require tty for all users ---------------------------------------- 0.73s 2026-02-19 02:48:37.151040 | orchestrator | osism.commons.operator : Create 
operator group -------------------------- 0.69s 2026-02-19 02:48:37.151044 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s 2026-02-19 02:48:37.151048 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.63s 2026-02-19 02:48:37.151053 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s 2026-02-19 02:48:37.151057 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s 2026-02-19 02:48:37.151061 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s 2026-02-19 02:48:37.151066 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2026-02-19 02:48:37.151070 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2026-02-19 02:48:37.151074 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2026-02-19 02:48:37.151078 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s 2026-02-19 02:48:37.151083 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2026-02-19 02:48:37.151087 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s 2026-02-19 02:48:37.464569 | orchestrator | + osism apply --environment custom facts 2026-02-19 02:48:39.389496 | orchestrator | 2026-02-19 02:48:39 | INFO  | Trying to run play facts in environment custom 2026-02-19 02:48:49.612689 | orchestrator | 2026-02-19 02:48:49 | INFO  | Task d613632d-7986-43cd-97a7-99b5ec3b78b2 (facts) was prepared for execution. 2026-02-19 02:48:49.612787 | orchestrator | 2026-02-19 02:48:49 | INFO  | It takes a moment until task d613632d-7986-43cd-97a7-99b5ec3b78b2 (facts) has been started and output is visible here. 
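The `facts` play that follows copies fact files onto the nodes (e.g. `testbed_ceph_devices`). This is standard Ansible local-facts behaviour: any executable placed in the facts directory that prints JSON is picked up during fact gathering and exposed as `ansible_local.<name>`. A minimal sketch, using an illustrative file name and `/tmp` instead of the real `/etc/ansible/facts.d` path:

```shell
# Hypothetical custom fact file; name and path are illustrative only.
mkdir -p /tmp/facts.d
cat > /tmp/facts.d/testbed_example.fact <<'EOF'
#!/bin/sh
# A .fact script must emit valid JSON on stdout.
echo '{"devices": ["/dev/sdb", "/dev/sdc"]}'
EOF
chmod +x /tmp/facts.d/testbed_example.fact

# Ansible runs the script at gather time; here we just invoke it directly.
/tmp/facts.d/testbed_example.fact
```

With the file installed under `/etc/ansible/facts.d`, a playbook could then read the list as `{{ ansible_local.testbed_example.devices }}`.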
2026-02-19 02:49:36.316405 | orchestrator | 2026-02-19 02:49:36.316544 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-02-19 02:49:36.316562 | orchestrator | 2026-02-19 02:49:36.316575 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-19 02:49:36.316588 | orchestrator | Thursday 19 February 2026 02:48:53 +0000 (0:00:00.063) 0:00:00.063 ***** 2026-02-19 02:49:36.316600 | orchestrator | ok: [testbed-manager] 2026-02-19 02:49:36.316613 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:49:36.316625 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:49:36.316636 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:49:36.316648 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:49:36.316659 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:49:36.316696 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:49:36.316709 | orchestrator | 2026-02-19 02:49:36.316720 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-02-19 02:49:36.316731 | orchestrator | Thursday 19 February 2026 02:48:54 +0000 (0:00:01.312) 0:00:01.375 ***** 2026-02-19 02:49:36.316743 | orchestrator | ok: [testbed-manager] 2026-02-19 02:49:36.316754 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:49:36.316765 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:49:36.316776 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:49:36.316786 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:49:36.316797 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:49:36.316821 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:49:36.316844 | orchestrator | 2026-02-19 02:49:36.316855 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-02-19 02:49:36.316866 | orchestrator | 2026-02-19 02:49:36.316877 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-02-19 02:49:36.316888 | orchestrator | Thursday 19 February 2026 02:48:56 +0000 (0:00:01.194) 0:00:02.570 ***** 2026-02-19 02:49:36.316899 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:49:36.316910 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:49:36.316921 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:49:36.316934 | orchestrator | 2026-02-19 02:49:36.316946 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-19 02:49:36.316959 | orchestrator | Thursday 19 February 2026 02:48:56 +0000 (0:00:00.075) 0:00:02.645 ***** 2026-02-19 02:49:36.316971 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:49:36.316984 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:49:36.316996 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:49:36.317009 | orchestrator | 2026-02-19 02:49:36.317021 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-19 02:49:36.317032 | orchestrator | Thursday 19 February 2026 02:48:56 +0000 (0:00:00.189) 0:00:02.834 ***** 2026-02-19 02:49:36.317043 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:49:36.317054 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:49:36.317065 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:49:36.317075 | orchestrator | 2026-02-19 02:49:36.317086 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-19 02:49:36.317098 | orchestrator | Thursday 19 February 2026 02:48:56 +0000 (0:00:00.189) 0:00:03.024 ***** 2026-02-19 02:49:36.317110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 02:49:36.317122 | orchestrator | 2026-02-19 02:49:36.317133 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-02-19 02:49:36.317144 | orchestrator | Thursday 19 February 2026 02:48:56 +0000 (0:00:00.117) 0:00:03.142 ***** 2026-02-19 02:49:36.317155 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:49:36.317166 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:49:36.317177 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:49:36.317187 | orchestrator | 2026-02-19 02:49:36.317198 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-19 02:49:36.317209 | orchestrator | Thursday 19 February 2026 02:48:57 +0000 (0:00:00.444) 0:00:03.586 ***** 2026-02-19 02:49:36.317320 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:49:36.317335 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:49:36.317346 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:49:36.317357 | orchestrator | 2026-02-19 02:49:36.317368 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-19 02:49:36.317379 | orchestrator | Thursday 19 February 2026 02:48:57 +0000 (0:00:00.115) 0:00:03.702 ***** 2026-02-19 02:49:36.317390 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:49:36.317400 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:49:36.317411 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:49:36.317422 | orchestrator | 2026-02-19 02:49:36.317433 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-19 02:49:36.317465 | orchestrator | Thursday 19 February 2026 02:48:58 +0000 (0:00:01.147) 0:00:04.850 ***** 2026-02-19 02:49:36.317476 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:49:36.317487 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:49:36.317498 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:49:36.317509 | orchestrator | 2026-02-19 02:49:36.317520 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-19 
02:49:36.317578 | orchestrator | Thursday 19 February 2026 02:48:58 +0000 (0:00:00.469) 0:00:05.320 ***** 2026-02-19 02:49:36.317591 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:49:36.317603 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:49:36.317614 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:49:36.317624 | orchestrator | 2026-02-19 02:49:36.317635 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-19 02:49:36.317646 | orchestrator | Thursday 19 February 2026 02:48:59 +0000 (0:00:01.126) 0:00:06.447 ***** 2026-02-19 02:49:36.317657 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:49:36.317668 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:49:36.317679 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:49:36.317690 | orchestrator | 2026-02-19 02:49:36.317700 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-02-19 02:49:36.317712 | orchestrator | Thursday 19 February 2026 02:49:17 +0000 (0:00:17.469) 0:00:23.916 ***** 2026-02-19 02:49:36.317723 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:49:36.317733 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:49:36.317744 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:49:36.317755 | orchestrator | 2026-02-19 02:49:36.317766 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-02-19 02:49:36.317797 | orchestrator | Thursday 19 February 2026 02:49:17 +0000 (0:00:00.109) 0:00:24.026 ***** 2026-02-19 02:49:36.317807 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:49:36.317817 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:49:36.317827 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:49:36.317836 | orchestrator | 2026-02-19 02:49:36.317846 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-19 
02:49:36.317861 | orchestrator | Thursday 19 February 2026 02:49:26 +0000 (0:00:09.229) 0:00:33.255 ***** 2026-02-19 02:49:36.317871 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:49:36.317881 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:49:36.317890 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:49:36.317900 | orchestrator | 2026-02-19 02:49:36.317909 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-02-19 02:49:36.317919 | orchestrator | Thursday 19 February 2026 02:49:27 +0000 (0:00:00.526) 0:00:33.782 ***** 2026-02-19 02:49:36.317929 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-02-19 02:49:36.317939 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-02-19 02:49:36.317949 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-02-19 02:49:36.317959 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-02-19 02:49:36.317968 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-02-19 02:49:36.317978 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-02-19 02:49:36.317987 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-02-19 02:49:36.317997 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-02-19 02:49:36.318006 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-02-19 02:49:36.318071 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-02-19 02:49:36.318084 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-02-19 02:49:36.318094 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-02-19 02:49:36.318103 | orchestrator | 2026-02-19 02:49:36.318113 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2026-02-19 02:49:36.318130 | orchestrator | Thursday 19 February 2026 02:49:31 +0000 (0:00:03.859) 0:00:37.642 ***** 2026-02-19 02:49:36.318140 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:49:36.318149 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:49:36.318159 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:49:36.318169 | orchestrator | 2026-02-19 02:49:36.318178 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-19 02:49:36.318188 | orchestrator | 2026-02-19 02:49:36.318197 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-19 02:49:36.318207 | orchestrator | Thursday 19 February 2026 02:49:32 +0000 (0:00:01.399) 0:00:39.041 ***** 2026-02-19 02:49:36.318233 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:49:36.318244 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:49:36.318253 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:49:36.318263 | orchestrator | ok: [testbed-manager] 2026-02-19 02:49:36.318273 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:49:36.318282 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:49:36.318291 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:49:36.318301 | orchestrator | 2026-02-19 02:49:36.318311 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 02:49:36.318321 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 02:49:36.318336 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 02:49:36.318353 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 02:49:36.318370 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 02:49:36.318387 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 02:49:36.318402 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 02:49:36.318419 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 02:49:36.318434 | orchestrator | 2026-02-19 02:49:36.318449 | orchestrator | 2026-02-19 02:49:36.318465 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 02:49:36.318481 | orchestrator | Thursday 19 February 2026 02:49:36 +0000 (0:00:03.807) 0:00:42.849 ***** 2026-02-19 02:49:36.318496 | orchestrator | =============================================================================== 2026-02-19 02:49:36.318512 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.47s 2026-02-19 02:49:36.318526 | orchestrator | Install required packages (Debian) -------------------------------------- 9.23s 2026-02-19 02:49:36.318541 | orchestrator | Copy fact files --------------------------------------------------------- 3.86s 2026-02-19 02:49:36.318557 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.81s 2026-02-19 02:49:36.318573 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.40s 2026-02-19 02:49:36.318590 | orchestrator | Create custom facts directory ------------------------------------------- 1.31s 2026-02-19 02:49:36.318612 | orchestrator | Copy fact file ---------------------------------------------------------- 1.19s 2026-02-19 02:49:36.505973 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.15s 2026-02-19 02:49:36.506196 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.13s 2026-02-19 02:49:36.506368 | orchestrator | Create custom facts directory 
------------------------------------------- 0.53s 2026-02-19 02:49:36.506427 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s 2026-02-19 02:49:36.506445 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s 2026-02-19 02:49:36.506462 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s 2026-02-19 02:49:36.506479 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s 2026-02-19 02:49:36.506499 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s 2026-02-19 02:49:36.506519 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2026-02-19 02:49:36.506536 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2026-02-19 02:49:36.506553 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.08s 2026-02-19 02:49:36.775561 | orchestrator | + osism apply bootstrap 2026-02-19 02:49:48.851777 | orchestrator | 2026-02-19 02:49:48 | INFO  | Task 458f3760-2615-4ca9-bf67-4349e5c2b323 (bootstrap) was prepared for execution. 2026-02-19 02:49:48.851909 | orchestrator | 2026-02-19 02:49:48 | INFO  | It takes a moment until task 458f3760-2615-4ca9-bf67-4349e5c2b323 (bootstrap) has been started and output is visible here. 
2026-02-19 02:50:04.409938 | orchestrator | 2026-02-19 02:50:04.410135 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-02-19 02:50:04.410157 | orchestrator | 2026-02-19 02:50:04.410171 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-02-19 02:50:04.410182 | orchestrator | Thursday 19 February 2026 02:49:52 +0000 (0:00:00.111) 0:00:00.111 ***** 2026-02-19 02:50:04.410194 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:04.410206 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:04.410218 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:04.410229 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:04.410239 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:50:04.410250 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:50:04.410261 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:50:04.410325 | orchestrator | 2026-02-19 02:50:04.410337 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-19 02:50:04.410348 | orchestrator | 2026-02-19 02:50:04.410359 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-19 02:50:04.410371 | orchestrator | Thursday 19 February 2026 02:49:52 +0000 (0:00:00.159) 0:00:00.270 ***** 2026-02-19 02:50:04.410382 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:50:04.410393 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:50:04.410403 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:50:04.410414 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:04.410425 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:04.410436 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:04.410447 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:04.410458 | orchestrator | 2026-02-19 02:50:04.410471 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-02-19 02:50:04.410484 | orchestrator | 2026-02-19 02:50:04.410497 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-19 02:50:04.410510 | orchestrator | Thursday 19 February 2026 02:49:56 +0000 (0:00:03.672) 0:00:03.943 ***** 2026-02-19 02:50:04.410523 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-19 02:50:04.410537 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-19 02:50:04.410550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-02-19 02:50:04.410563 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-19 02:50:04.410575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 02:50:04.410589 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-19 02:50:04.410601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 02:50:04.410613 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-19 02:50:04.410626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 02:50:04.410667 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-19 02:50:04.410680 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-02-19 02:50:04.410693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-19 02:50:04.410840 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-19 02:50:04.410855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-19 02:50:04.410866 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-02-19 02:50:04.410877 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-19 02:50:04.410888 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:50:04.410899 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-2)  2026-02-19 02:50:04.410910 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:50:04.410921 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-19 02:50:04.410939 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-02-19 02:50:04.410957 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-19 02:50:04.410974 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-02-19 02:50:04.410993 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-19 02:50:04.411013 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-19 02:50:04.411027 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-19 02:50:04.411038 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-19 02:50:04.411048 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-19 02:50:04.411059 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-19 02:50:04.411070 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-19 02:50:04.411080 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-19 02:50:04.411091 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-19 02:50:04.411102 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-19 02:50:04.411112 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-19 02:50:04.411123 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-19 02:50:04.411134 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-19 02:50:04.411144 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:50:04.411155 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-02-19 02:50:04.411166 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-1)  2026-02-19 02:50:04.411176 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-19 02:50:04.411187 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-19 02:50:04.411197 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-19 02:50:04.411208 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:50:04.411218 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-19 02:50:04.411229 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-19 02:50:04.411240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-19 02:50:04.411251 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:50:04.411308 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-19 02:50:04.411321 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-19 02:50:04.411332 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-19 02:50:04.411362 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:50:04.411374 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-19 02:50:04.411384 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-19 02:50:04.411395 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-19 02:50:04.411419 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-19 02:50:04.411430 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:50:04.411441 | orchestrator | 2026-02-19 02:50:04.411452 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-02-19 02:50:04.411463 | orchestrator | 2026-02-19 02:50:04.411474 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-02-19 02:50:04.411485 | orchestrator | Thursday 19 February 2026 02:49:56 +0000 
(0:00:00.435) 0:00:04.378 ***** 2026-02-19 02:50:04.411496 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:04.411507 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:50:04.411518 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:50:04.411528 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:50:04.411539 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:04.411550 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:04.411561 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:04.411571 | orchestrator | 2026-02-19 02:50:04.411583 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-02-19 02:50:04.411594 | orchestrator | Thursday 19 February 2026 02:49:58 +0000 (0:00:01.276) 0:00:05.655 ***** 2026-02-19 02:50:04.411604 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:04.411615 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:04.411626 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:04.411637 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:50:04.411647 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:50:04.411658 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:50:04.411669 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:04.411680 | orchestrator | 2026-02-19 02:50:04.411690 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-02-19 02:50:04.411701 | orchestrator | Thursday 19 February 2026 02:49:59 +0000 (0:00:01.229) 0:00:06.884 ***** 2026-02-19 02:50:04.411713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:50:04.411726 | orchestrator | 2026-02-19 02:50:04.411737 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-19 02:50:04.411748 | orchestrator | 
Thursday 19 February 2026 02:49:59 +0000 (0:00:00.285) 0:00:07.170 ***** 2026-02-19 02:50:04.411760 | orchestrator | changed: [testbed-manager] 2026-02-19 02:50:04.411771 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:50:04.411782 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:50:04.411793 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:50:04.411804 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:50:04.411814 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:50:04.411825 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:50:04.411836 | orchestrator | 2026-02-19 02:50:04.411847 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-19 02:50:04.411858 | orchestrator | Thursday 19 February 2026 02:50:01 +0000 (0:00:02.059) 0:00:09.229 ***** 2026-02-19 02:50:04.411869 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:50:04.411915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:50:04.411929 | orchestrator | 2026-02-19 02:50:04.411940 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-19 02:50:04.411951 | orchestrator | Thursday 19 February 2026 02:50:02 +0000 (0:00:00.276) 0:00:09.505 ***** 2026-02-19 02:50:04.411962 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:50:04.411976 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:50:04.411994 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:50:04.412014 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:50:04.412033 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:50:04.412053 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:50:04.412078 | orchestrator | 2026-02-19 02:50:04.412103 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2026-02-19 02:50:04.412115 | orchestrator | Thursday 19 February 2026 02:50:03 +0000 (0:00:01.031) 0:00:10.537 ***** 2026-02-19 02:50:04.412126 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:50:04.412137 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:50:04.412147 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:50:04.412158 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:50:04.412169 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:50:04.412179 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:50:04.412190 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:50:04.412201 | orchestrator | 2026-02-19 02:50:04.412212 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-19 02:50:04.412223 | orchestrator | Thursday 19 February 2026 02:50:03 +0000 (0:00:00.727) 0:00:11.265 ***** 2026-02-19 02:50:04.412233 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:50:04.412244 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:50:04.412255 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:50:04.412291 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:50:04.412304 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:50:04.412315 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:50:04.412326 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:04.412336 | orchestrator | 2026-02-19 02:50:04.412347 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-19 02:50:04.412360 | orchestrator | Thursday 19 February 2026 02:50:04 +0000 (0:00:00.405) 0:00:11.670 ***** 2026-02-19 02:50:04.412370 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:50:04.412381 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:50:04.412400 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:50:16.670554 | orchestrator | skipping: 
[testbed-node-5] 2026-02-19 02:50:16.670686 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:50:16.670706 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:50:16.670717 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:50:16.670729 | orchestrator | 2026-02-19 02:50:16.670742 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-19 02:50:16.670755 | orchestrator | Thursday 19 February 2026 02:50:04 +0000 (0:00:00.201) 0:00:11.872 ***** 2026-02-19 02:50:16.670768 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:50:16.670796 | orchestrator | 2026-02-19 02:50:16.670808 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-19 02:50:16.670820 | orchestrator | Thursday 19 February 2026 02:50:04 +0000 (0:00:00.273) 0:00:12.145 ***** 2026-02-19 02:50:16.670831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:50:16.670843 | orchestrator | 2026-02-19 02:50:16.670854 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-19 02:50:16.670864 | orchestrator | Thursday 19 February 2026 02:50:05 +0000 (0:00:00.255) 0:00:12.401 ***** 2026-02-19 02:50:16.670875 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:16.670887 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:50:16.670898 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:16.670908 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:50:16.670920 | orchestrator | ok: [testbed-node-0] 2026-02-19 
02:50:16.670931 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:16.670941 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:16.670952 | orchestrator | 2026-02-19 02:50:16.670962 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-19 02:50:16.670974 | orchestrator | Thursday 19 February 2026 02:50:06 +0000 (0:00:01.507) 0:00:13.908 ***** 2026-02-19 02:50:16.671009 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:50:16.671020 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:50:16.671031 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:50:16.671042 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:50:16.671052 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:50:16.671063 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:50:16.671075 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:50:16.671088 | orchestrator | 2026-02-19 02:50:16.671100 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-19 02:50:16.671113 | orchestrator | Thursday 19 February 2026 02:50:06 +0000 (0:00:00.280) 0:00:14.189 ***** 2026-02-19 02:50:16.671125 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:16.671138 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:16.671150 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:16.671162 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:16.671174 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:50:16.671186 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:50:16.671198 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:50:16.671211 | orchestrator | 2026-02-19 02:50:16.671223 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-19 02:50:16.671235 | orchestrator | Thursday 19 February 2026 02:50:07 +0000 (0:00:00.548) 0:00:14.738 ***** 2026-02-19 02:50:16.671248 | orchestrator | skipping: 
[testbed-manager] 2026-02-19 02:50:16.671260 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:50:16.671272 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:50:16.671312 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:50:16.671326 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:50:16.671337 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:50:16.671350 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:50:16.671362 | orchestrator | 2026-02-19 02:50:16.671374 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-19 02:50:16.671387 | orchestrator | Thursday 19 February 2026 02:50:07 +0000 (0:00:00.256) 0:00:14.994 ***** 2026-02-19 02:50:16.671399 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:16.671412 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:50:16.671425 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:50:16.671437 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:50:16.671449 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:50:16.671460 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:50:16.671480 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:50:16.671491 | orchestrator | 2026-02-19 02:50:16.671502 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-19 02:50:16.671516 | orchestrator | Thursday 19 February 2026 02:50:08 +0000 (0:00:00.631) 0:00:15.625 ***** 2026-02-19 02:50:16.671534 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:16.671553 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:50:16.671572 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:50:16.671591 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:50:16.671608 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:50:16.671626 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:50:16.671644 | orchestrator | changed: 
[testbed-node-1] 2026-02-19 02:50:16.671663 | orchestrator | 2026-02-19 02:50:16.671682 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-19 02:50:16.671701 | orchestrator | Thursday 19 February 2026 02:50:09 +0000 (0:00:01.166) 0:00:16.792 ***** 2026-02-19 02:50:16.671719 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:16.671733 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:16.671743 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:50:16.671754 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:16.671765 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:16.671775 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:50:16.671786 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:50:16.671796 | orchestrator | 2026-02-19 02:50:16.671807 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-19 02:50:16.671828 | orchestrator | Thursday 19 February 2026 02:50:10 +0000 (0:00:01.355) 0:00:18.148 ***** 2026-02-19 02:50:16.671858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:50:16.671870 | orchestrator | 2026-02-19 02:50:16.671881 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-19 02:50:16.671891 | orchestrator | Thursday 19 February 2026 02:50:11 +0000 (0:00:00.286) 0:00:18.434 ***** 2026-02-19 02:50:16.671902 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:50:16.671913 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:50:16.671924 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:50:16.671934 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:50:16.671945 | orchestrator | changed: [testbed-node-4] 2026-02-19 
02:50:16.671956 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:50:16.671967 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:50:16.671977 | orchestrator | 2026-02-19 02:50:16.671988 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-19 02:50:16.672000 | orchestrator | Thursday 19 February 2026 02:50:12 +0000 (0:00:01.276) 0:00:19.711 ***** 2026-02-19 02:50:16.672010 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:16.672021 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:16.672032 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:16.672042 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:16.672053 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:50:16.672064 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:50:16.672074 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:50:16.672085 | orchestrator | 2026-02-19 02:50:16.672095 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-19 02:50:16.672106 | orchestrator | Thursday 19 February 2026 02:50:12 +0000 (0:00:00.196) 0:00:19.907 ***** 2026-02-19 02:50:16.672117 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:16.672127 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:16.672138 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:16.672148 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:16.672159 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:50:16.672169 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:50:16.672180 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:50:16.672191 | orchestrator | 2026-02-19 02:50:16.672204 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-19 02:50:16.672222 | orchestrator | Thursday 19 February 2026 02:50:12 +0000 (0:00:00.196) 0:00:20.104 ***** 2026-02-19 02:50:16.672241 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:16.672260 | 
orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:16.672278 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:16.672346 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:16.672366 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:50:16.672383 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:50:16.672401 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:50:16.672416 | orchestrator | 2026-02-19 02:50:16.672433 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-19 02:50:16.672453 | orchestrator | Thursday 19 February 2026 02:50:12 +0000 (0:00:00.191) 0:00:20.295 ***** 2026-02-19 02:50:16.672471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:50:16.672492 | orchestrator | 2026-02-19 02:50:16.672511 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-19 02:50:16.672528 | orchestrator | Thursday 19 February 2026 02:50:13 +0000 (0:00:00.269) 0:00:20.565 ***** 2026-02-19 02:50:16.672543 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:16.672554 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:16.672576 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:16.672587 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:16.672597 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:50:16.672608 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:50:16.672618 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:50:16.672629 | orchestrator | 2026-02-19 02:50:16.672640 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-19 02:50:16.672651 | orchestrator | Thursday 19 February 2026 02:50:13 +0000 (0:00:00.524) 0:00:21.089 ***** 2026-02-19 02:50:16.672661 | orchestrator | 
skipping: [testbed-manager] 2026-02-19 02:50:16.672672 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:50:16.672683 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:50:16.672693 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:50:16.672704 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:50:16.672714 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:50:16.672725 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:50:16.672736 | orchestrator | 2026-02-19 02:50:16.672747 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-19 02:50:16.672758 | orchestrator | Thursday 19 February 2026 02:50:13 +0000 (0:00:00.211) 0:00:21.301 ***** 2026-02-19 02:50:16.672769 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:16.672780 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:16.672790 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:16.672801 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:50:16.672812 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:16.672822 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:50:16.672833 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:50:16.672843 | orchestrator | 2026-02-19 02:50:16.672854 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-19 02:50:16.672865 | orchestrator | Thursday 19 February 2026 02:50:14 +0000 (0:00:01.051) 0:00:22.352 ***** 2026-02-19 02:50:16.672876 | orchestrator | ok: [testbed-manager] 2026-02-19 02:50:16.672886 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:50:16.672897 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:50:16.672908 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:50:16.672918 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:50:16.672940 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:50:16.672951 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:50:16.672962 | orchestrator | 
2026-02-19 02:50:16.672973 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-19 02:50:16.672984 | orchestrator | Thursday 19 February 2026 02:50:15 +0000 (0:00:00.564) 0:00:22.917 *****
2026-02-19 02:50:16.672999 | orchestrator | ok: [testbed-manager]
2026-02-19 02:50:16.673017 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:50:16.673035 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:50:16.673054 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:50:16.673085 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:50:58.179183 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:50:58.179310 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:50:58.179334 | orchestrator |
2026-02-19 02:50:58.179348 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-19 02:50:58.179431 | orchestrator | Thursday 19 February 2026 02:50:16 +0000 (0:00:01.133) 0:00:24.051 *****
2026-02-19 02:50:58.179441 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:50:58.179450 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:50:58.179458 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:50:58.179466 | orchestrator | changed: [testbed-manager]
2026-02-19 02:50:58.179475 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:50:58.179483 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:50:58.179492 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:50:58.179500 | orchestrator |
2026-02-19 02:50:58.179508 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-19 02:50:58.179517 | orchestrator | Thursday 19 February 2026 02:50:33 +0000 (0:00:16.841) 0:00:40.892 *****
2026-02-19 02:50:58.179525 | orchestrator | ok: [testbed-manager]
2026-02-19 02:50:58.179554 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:50:58.179562 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:50:58.179570 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:50:58.179578 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:50:58.179586 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:50:58.179594 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:50:58.179602 | orchestrator |
2026-02-19 02:50:58.179610 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-19 02:50:58.179618 | orchestrator | Thursday 19 February 2026 02:50:33 +0000 (0:00:00.265) 0:00:41.157 *****
2026-02-19 02:50:58.179626 | orchestrator | ok: [testbed-manager]
2026-02-19 02:50:58.179634 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:50:58.179642 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:50:58.179650 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:50:58.179658 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:50:58.179666 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:50:58.179674 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:50:58.179682 | orchestrator |
2026-02-19 02:50:58.179690 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-19 02:50:58.179698 | orchestrator | Thursday 19 February 2026 02:50:34 +0000 (0:00:00.262) 0:00:41.420 *****
2026-02-19 02:50:58.179706 | orchestrator | ok: [testbed-manager]
2026-02-19 02:50:58.179715 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:50:58.179723 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:50:58.179733 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:50:58.179741 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:50:58.179750 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:50:58.179760 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:50:58.179770 | orchestrator |
2026-02-19 02:50:58.179779 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-19 02:50:58.179789 | orchestrator | Thursday 19 February 2026 02:50:34 +0000 (0:00:00.240) 0:00:41.660 *****
2026-02-19 02:50:58.179799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:50:58.179811 | orchestrator |
2026-02-19 02:50:58.179820 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-02-19 02:50:58.179829 | orchestrator | Thursday 19 February 2026 02:50:34 +0000 (0:00:00.302) 0:00:41.962 *****
2026-02-19 02:50:58.179837 | orchestrator | ok: [testbed-manager]
2026-02-19 02:50:58.179845 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:50:58.179853 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:50:58.179861 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:50:58.179868 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:50:58.179876 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:50:58.179884 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:50:58.179892 | orchestrator |
2026-02-19 02:50:58.179900 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-02-19 02:50:58.179908 | orchestrator | Thursday 19 February 2026 02:50:36 +0000 (0:00:01.869) 0:00:43.832 *****
2026-02-19 02:50:58.179916 | orchestrator | changed: [testbed-manager]
2026-02-19 02:50:58.179924 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:50:58.179932 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:50:58.179940 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:50:58.179947 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:50:58.179955 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:50:58.179963 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:50:58.179971 | orchestrator |
2026-02-19 02:50:58.179979 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-02-19 02:50:58.180000 | orchestrator | Thursday 19 February 2026 02:50:37 +0000 (0:00:01.234) 0:00:45.067 *****
2026-02-19 02:50:58.180009 | orchestrator | ok: [testbed-manager]
2026-02-19 02:50:58.180017 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:50:58.180025 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:50:58.180038 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:50:58.180046 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:50:58.180054 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:50:58.180062 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:50:58.180069 | orchestrator |
2026-02-19 02:50:58.180077 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-02-19 02:50:58.180085 | orchestrator | Thursday 19 February 2026 02:50:38 +0000 (0:00:00.896) 0:00:45.963 *****
2026-02-19 02:50:58.180094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:50:58.180104 | orchestrator |
2026-02-19 02:50:58.180112 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-02-19 02:50:58.180124 | orchestrator | Thursday 19 February 2026 02:50:38 +0000 (0:00:00.294) 0:00:46.258 *****
2026-02-19 02:50:58.180138 | orchestrator | changed: [testbed-manager]
2026-02-19 02:50:58.180150 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:50:58.180163 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:50:58.180176 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:50:58.180188 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:50:58.180201 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:50:58.180212 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:50:58.180225 | orchestrator |
2026-02-19 02:50:58.180263 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-02-19 02:50:58.180279 | orchestrator | Thursday 19 February 2026 02:50:39 +0000 (0:00:01.052) 0:00:47.310 *****
2026-02-19 02:50:58.180292 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:50:58.180306 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:50:58.180318 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:50:58.180332 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:50:58.180341 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:50:58.180348 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:50:58.180380 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:50:58.180389 | orchestrator |
2026-02-19 02:50:58.180397 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-02-19 02:50:58.180405 | orchestrator | Thursday 19 February 2026 02:50:40 +0000 (0:00:00.235) 0:00:47.546 *****
2026-02-19 02:50:58.180413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:50:58.180421 | orchestrator |
2026-02-19 02:50:58.180429 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-02-19 02:50:58.180437 | orchestrator | Thursday 19 February 2026 02:50:40 +0000 (0:00:00.311) 0:00:47.857 *****
2026-02-19 02:50:58.180446 | orchestrator | ok: [testbed-manager]
2026-02-19 02:50:58.180454 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:50:58.180462 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:50:58.180470 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:50:58.180477 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:50:58.180485 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:50:58.180493 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:50:58.180501 | orchestrator |
2026-02-19 02:50:58.180509 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-02-19 02:50:58.180517 | orchestrator | Thursday 19 February 2026 02:50:42 +0000 (0:00:01.857) 0:00:49.715 *****
2026-02-19 02:50:58.180525 | orchestrator | changed: [testbed-manager]
2026-02-19 02:50:58.180533 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:50:58.180541 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:50:58.180549 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:50:58.180557 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:50:58.180565 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:50:58.180573 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:50:58.180588 | orchestrator |
2026-02-19 02:50:58.180597 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-02-19 02:50:58.180605 | orchestrator | Thursday 19 February 2026 02:50:43 +0000 (0:00:01.193) 0:00:50.909 *****
2026-02-19 02:50:58.180613 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:50:58.180621 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:50:58.180629 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:50:58.180637 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:50:58.180645 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:50:58.180652 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:50:58.180660 | orchestrator | changed: [testbed-manager]
2026-02-19 02:50:58.180668 | orchestrator |
2026-02-19 02:50:58.180676 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-02-19 02:50:58.180684 | orchestrator | Thursday 19 February 2026 02:50:54 +0000 (0:00:11.345) 0:01:02.255 *****
2026-02-19 02:50:58.180692 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:50:58.180700 | orchestrator | ok: [testbed-manager]
2026-02-19 02:50:58.180708 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:50:58.180715 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:50:58.180723 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:50:58.180731 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:50:58.180739 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:50:58.180747 | orchestrator |
2026-02-19 02:50:58.180755 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-02-19 02:50:58.180763 | orchestrator | Thursday 19 February 2026 02:50:55 +0000 (0:00:00.796) 0:01:03.051 *****
2026-02-19 02:50:58.180771 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:50:58.180779 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:50:58.180786 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:50:58.180794 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:50:58.180802 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:50:58.180810 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:50:58.180817 | orchestrator | ok: [testbed-manager]
2026-02-19 02:50:58.180825 | orchestrator |
2026-02-19 02:50:58.180833 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-02-19 02:50:58.180841 | orchestrator | Thursday 19 February 2026 02:50:57 +0000 (0:00:01.735) 0:01:04.787 *****
2026-02-19 02:50:58.180855 | orchestrator | ok: [testbed-manager]
2026-02-19 02:50:58.180864 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:50:58.180872 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:50:58.180879 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:50:58.180887 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:50:58.180895 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:50:58.180903 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:50:58.180911 | orchestrator |
2026-02-19 02:50:58.180919 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-02-19 02:50:58.180927 | orchestrator | Thursday 19 February 2026 02:50:57 +0000 (0:00:00.237) 0:01:05.025 *****
2026-02-19 02:50:58.180935 | orchestrator | ok: [testbed-manager]
2026-02-19 02:50:58.180943 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:50:58.180950 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:50:58.180958 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:50:58.180966 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:50:58.180974 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:50:58.180981 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:50:58.180989 | orchestrator |
2026-02-19 02:50:58.180997 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-02-19 02:50:58.181005 | orchestrator | Thursday 19 February 2026 02:50:57 +0000 (0:00:00.244) 0:01:05.269 *****
2026-02-19 02:50:58.181014 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:50:58.181022 | orchestrator |
2026-02-19 02:50:58.181036 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-02-19 02:53:22.905114 | orchestrator | Thursday 19 February 2026 02:50:58 +0000 (0:00:00.288) 0:01:05.558 *****
2026-02-19 02:53:22.905213 | orchestrator | ok: [testbed-manager]
2026-02-19 02:53:22.905226 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:53:22.905235 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:53:22.905243 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:53:22.905251 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:53:22.905259 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:53:22.905267 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:53:22.905276 | orchestrator |
2026-02-19 02:53:22.905285 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-02-19 02:53:22.905294 | orchestrator | Thursday 19 February 2026 02:51:00 +0000 (0:00:02.036) 0:01:07.595 *****
2026-02-19 02:53:22.905302 | orchestrator | changed: [testbed-manager]
2026-02-19 02:53:22.905312 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:53:22.905320 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:53:22.905328 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:53:22.905336 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:53:22.905356 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:53:22.905364 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:53:22.905372 | orchestrator |
2026-02-19 02:53:22.905381 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-02-19 02:53:22.905390 | orchestrator | Thursday 19 February 2026 02:51:00 +0000 (0:00:00.605) 0:01:08.201 *****
2026-02-19 02:53:22.905398 | orchestrator | ok: [testbed-manager]
2026-02-19 02:53:22.905406 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:53:22.905414 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:53:22.905422 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:53:22.905430 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:53:22.905438 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:53:22.905446 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:53:22.905454 | orchestrator |
2026-02-19 02:53:22.905462 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-02-19 02:53:22.905471 | orchestrator | Thursday 19 February 2026 02:51:01 +0000 (0:00:00.231) 0:01:08.433 *****
2026-02-19 02:53:22.905531 | orchestrator | ok: [testbed-manager]
2026-02-19 02:53:22.905542 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:53:22.905550 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:53:22.905558 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:53:22.905566 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:53:22.905573 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:53:22.905581 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:53:22.905589 | orchestrator |
2026-02-19 02:53:22.905598 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-19 02:53:22.905606 | orchestrator | Thursday 19 February 2026 02:51:02 +0000 (0:00:01.322) 0:01:09.755 *****
2026-02-19 02:53:22.905614 | orchestrator | changed: [testbed-manager]
2026-02-19 02:53:22.905622 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:53:22.905630 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:53:22.905638 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:53:22.905649 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:53:22.905657 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:53:22.905666 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:53:22.905675 | orchestrator |
2026-02-19 02:53:22.905688 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-19 02:53:22.905697 | orchestrator | Thursday 19 February 2026 02:51:04 +0000 (0:00:02.564) 0:01:12.319 *****
2026-02-19 02:53:22.905706 | orchestrator | ok: [testbed-manager]
2026-02-19 02:53:22.905715 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:53:22.905723 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:53:22.905732 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:53:22.905741 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:53:22.905749 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:53:22.905758 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:53:22.905767 | orchestrator |
2026-02-19 02:53:22.905776 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-19 02:53:22.905804 | orchestrator | Thursday 19 February 2026 02:51:07 +0000 (0:00:02.931) 0:01:15.251 *****
2026-02-19 02:53:22.905813 | orchestrator | ok: [testbed-manager]
2026-02-19 02:53:22.905822 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:53:22.905830 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:53:22.905839 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:53:22.905848 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:53:22.905857 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:53:22.905865 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:53:22.905874 | orchestrator |
2026-02-19 02:53:22.905883 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-19 02:53:22.905892 | orchestrator | Thursday 19 February 2026 02:51:43 +0000 (0:00:35.288) 0:01:50.540 *****
2026-02-19 02:53:22.905901 | orchestrator | changed: [testbed-manager]
2026-02-19 02:53:22.905910 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:53:22.905919 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:53:22.905928 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:53:22.905937 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:53:22.905945 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:53:22.905954 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:53:22.905963 | orchestrator |
2026-02-19 02:53:22.905972 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-19 02:53:22.905981 | orchestrator | Thursday 19 February 2026 02:53:07 +0000 (0:01:24.461) 0:03:15.001 *****
2026-02-19 02:53:22.905993 | orchestrator | ok: [testbed-manager]
2026-02-19 02:53:22.906005 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:53:22.906076 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:53:22.906092 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:53:22.906105 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:53:22.906119 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:53:22.906134 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:53:22.906146 | orchestrator |
2026-02-19 02:53:22.906159 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-19 02:53:22.906168 | orchestrator | Thursday 19 February 2026 02:53:09 +0000 (0:00:01.979) 0:03:16.980 *****
2026-02-19 02:53:22.906176 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:53:22.906184 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:53:22.906192 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:53:22.906200 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:53:22.906207 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:53:22.906215 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:53:22.906223 | orchestrator | changed: [testbed-manager]
2026-02-19 02:53:22.906231 | orchestrator |
2026-02-19 02:53:22.906239 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-19 02:53:22.906247 | orchestrator | Thursday 19 February 2026 02:53:21 +0000 (0:00:12.163) 0:03:29.144 *****
2026-02-19 02:53:22.906289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-19 02:53:22.906316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-19 02:53:22.906338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-19 02:53:22.906348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-19 02:53:22.906357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-19 02:53:22.906365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-19 02:53:22.906373 | orchestrator |
2026-02-19 02:53:22.906381 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-19 02:53:22.906390 | orchestrator | Thursday 19 February 2026 02:53:22 +0000 (0:00:00.338) 0:03:29.483 *****
2026-02-19 02:53:22.906398 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-19 02:53:22.906406 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-19 02:53:22.906414 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:53:22.906422 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-19 02:53:22.906430 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:53:22.906438 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:53:22.906450 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-19 02:53:22.906458 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:53:22.906466 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-19 02:53:22.906474 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-19 02:53:22.906544 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-19 02:53:22.906554 | orchestrator |
2026-02-19 02:53:22.906562 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-19 02:53:22.906570 | orchestrator | Thursday 19 February 2026 02:53:22 +0000 (0:00:00.742) 0:03:30.225 *****
2026-02-19 02:53:22.906578 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-19 02:53:22.906587 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-19 02:53:22.906595 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-19 02:53:22.906602 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-19 02:53:22.906610 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-19 02:53:22.906625 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-19 02:53:30.853570 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-19 02:53:30.853660 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-19 02:53:30.853690 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-19 02:53:30.853698 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-19 02:53:30.853706 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-19 02:53:30.853714 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-19 02:53:30.853721 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-19 02:53:30.853728 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-19 02:53:30.853735 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-19 02:53:30.853742 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-19 02:53:30.853750 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-19 02:53:30.853757 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-19 02:53:30.853764 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-19 02:53:30.853771 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-19 02:53:30.853778 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-19 02:53:30.853786 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-19 02:53:30.853793 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-19 02:53:30.853801 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:53:30.853810 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-19 02:53:30.853817 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-19 02:53:30.853824 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-19 02:53:30.853831 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-19 02:53:30.853838 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-19 02:53:30.853846 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-19 02:53:30.853853 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-19 02:53:30.853860 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-19 02:53:30.853867 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-19 02:53:30.853875 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:53:30.853882 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-19 02:53:30.853890 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-19 02:53:30.853922 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-19 02:53:30.853929 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-19 02:53:30.853937 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-19 02:53:30.853944 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-19 02:53:30.853951 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-19 02:53:30.853965 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-19 02:53:30.853972 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:53:30.853980 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:53:30.853987 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-19 02:53:30.853994 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-19 02:53:30.854001 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-19 02:53:30.854008 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-19 02:53:30.854059 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-19 02:53:30.854083 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-19 02:53:30.854092 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-19 02:53:30.854100 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-19 02:53:30.854108 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-19 02:53:30.854116 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-19 02:53:30.854124 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-19 02:53:30.854133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-19 02:53:30.854141 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-19 02:53:30.854150 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-19 02:53:30.854158 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-19 02:53:30.854166 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-19 02:53:30.854174 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-19 02:53:30.854182 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-19 02:53:30.854190 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-19 02:53:30.854198 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-19 02:53:30.854206 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-19 02:53:30.854215 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-19 02:53:30.854223 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-19 02:53:30.854231 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-19 02:53:30.854238 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-19 02:53:30.854245 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-19 02:53:30.854253 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-19 02:53:30.854260 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-19 02:53:30.854267 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-19 02:53:30.854275 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-19 02:53:30.854288 | orchestrator |
2026-02-19 02:53:30.854296 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-19 02:53:30.854304 | orchestrator | Thursday 19 February 2026 02:53:28 +0000 (0:00:05.935) 0:03:36.161 *****
2026-02-19 02:53:30.854311 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-19 02:53:30.854319 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-19 02:53:30.854326 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-19 02:53:30.854333 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-19 02:53:30.854345 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-19 02:53:30.854352 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-19 02:53:30.854360 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-19 02:53:30.854367 | orchestrator |
2026-02-19 02:53:30.854374 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-19 02:53:30.854381 | orchestrator | Thursday 19 February 2026 02:53:30 +0000 (0:00:01.548) 0:03:37.709 *****
2026-02-19 02:53:30.854388 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-19 02:53:30.854396 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:53:30.854403 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-19 02:53:30.854410 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-19 02:53:30.854417 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:53:30.854425 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:53:30.854432 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-19 02:53:30.854439 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:53:30.854446 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-19 02:53:30.854468 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-19 02:53:30.854481 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-19 02:53:45.330968 | orchestrator |
2026-02-19 02:53:45.331081 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-19 02:53:45.331101 | orchestrator | Thursday 19 February 2026 02:53:30 +0000 (0:00:00.526) 0:03:38.235 *****
2026-02-19 02:53:45.331116 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-19 02:53:45.331131 | orchestrator | skipping:
[testbed-manager] 2026-02-19 02:53:45.331147 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-19 02:53:45.331161 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:53:45.331174 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-19 02:53:45.331188 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-19 02:53:45.331202 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:53:45.331216 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:53:45.331230 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-19 02:53:45.331244 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-19 02:53:45.331258 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-19 02:53:45.331272 | orchestrator | 2026-02-19 02:53:45.331282 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-02-19 02:53:45.331311 | orchestrator | Thursday 19 February 2026 02:53:32 +0000 (0:00:01.628) 0:03:39.864 ***** 2026-02-19 02:53:45.331320 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-19 02:53:45.331328 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:53:45.331336 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-19 02:53:45.331349 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:53:45.331363 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-19 02:53:45.331376 | orchestrator | skipping: 
[testbed-node-1] 2026-02-19 02:53:45.331390 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-19 02:53:45.331403 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:53:45.331481 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-19 02:53:45.331490 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-19 02:53:45.331498 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-19 02:53:45.331510 | orchestrator | 2026-02-19 02:53:45.331524 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-19 02:53:45.331538 | orchestrator | Thursday 19 February 2026 02:53:34 +0000 (0:00:01.602) 0:03:41.467 ***** 2026-02-19 02:53:45.331551 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:53:45.331566 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:53:45.331580 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:53:45.331592 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:53:45.331607 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:53:45.331616 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:53:45.331626 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:53:45.331635 | orchestrator | 2026-02-19 02:53:45.331644 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-19 02:53:45.331654 | orchestrator | Thursday 19 February 2026 02:53:34 +0000 (0:00:00.257) 0:03:41.724 ***** 2026-02-19 02:53:45.331663 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:53:45.331672 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:53:45.331682 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:53:45.331696 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:53:45.331710 | 
orchestrator | ok: [testbed-node-5] 2026-02-19 02:53:45.331724 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:53:45.331737 | orchestrator | ok: [testbed-manager] 2026-02-19 02:53:45.331751 | orchestrator | 2026-02-19 02:53:45.331765 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-02-19 02:53:45.331778 | orchestrator | Thursday 19 February 2026 02:53:39 +0000 (0:00:04.871) 0:03:46.596 ***** 2026-02-19 02:53:45.331792 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-02-19 02:53:45.331807 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-02-19 02:53:45.331820 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:53:45.331834 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-02-19 02:53:45.331848 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:53:45.331861 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:53:45.331874 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-02-19 02:53:45.331883 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-02-19 02:53:45.331893 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:53:45.331902 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-02-19 02:53:45.331930 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:53:45.331944 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:53:45.331957 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-02-19 02:53:45.331970 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:53:45.331984 | orchestrator | 2026-02-19 02:53:45.332009 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-19 02:53:45.332022 | orchestrator | Thursday 19 February 2026 02:53:39 +0000 (0:00:00.295) 0:03:46.892 ***** 2026-02-19 02:53:45.332035 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-19 02:53:45.332049 | orchestrator | ok: [testbed-node-3] => 
(item=cron) 2026-02-19 02:53:45.332063 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-02-19 02:53:45.332097 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-19 02:53:45.332110 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-19 02:53:45.332119 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-19 02:53:45.332126 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-19 02:53:45.332134 | orchestrator | 2026-02-19 02:53:45.332142 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-19 02:53:45.332150 | orchestrator | Thursday 19 February 2026 02:53:40 +0000 (0:00:01.187) 0:03:48.079 ***** 2026-02-19 02:53:45.332159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:53:45.332174 | orchestrator | 2026-02-19 02:53:45.332188 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-19 02:53:45.332202 | orchestrator | Thursday 19 February 2026 02:53:41 +0000 (0:00:00.374) 0:03:48.453 ***** 2026-02-19 02:53:45.332216 | orchestrator | ok: [testbed-manager] 2026-02-19 02:53:45.332229 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:53:45.332242 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:53:45.332255 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:53:45.332268 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:53:45.332281 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:53:45.332294 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:53:45.332308 | orchestrator | 2026-02-19 02:53:45.332321 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-19 02:53:45.332334 | orchestrator | Thursday 19 February 2026 02:53:42 +0000 (0:00:01.410) 0:03:49.864 
***** 2026-02-19 02:53:45.332346 | orchestrator | ok: [testbed-manager] 2026-02-19 02:53:45.332359 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:53:45.332372 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:53:45.332385 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:53:45.332397 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:53:45.332444 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:53:45.332460 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:53:45.332474 | orchestrator | 2026-02-19 02:53:45.332487 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-19 02:53:45.332500 | orchestrator | Thursday 19 February 2026 02:53:43 +0000 (0:00:00.633) 0:03:50.498 ***** 2026-02-19 02:53:45.332513 | orchestrator | changed: [testbed-manager] 2026-02-19 02:53:45.332527 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:53:45.332540 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:53:45.332554 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:53:45.332567 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:53:45.332580 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:53:45.332591 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:53:45.332599 | orchestrator | 2026-02-19 02:53:45.332607 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-19 02:53:45.332615 | orchestrator | Thursday 19 February 2026 02:53:43 +0000 (0:00:00.592) 0:03:51.091 ***** 2026-02-19 02:53:45.332623 | orchestrator | ok: [testbed-manager] 2026-02-19 02:53:45.332631 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:53:45.332639 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:53:45.332647 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:53:45.332656 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:53:45.332670 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:53:45.332684 | orchestrator | ok: [testbed-node-2] 2026-02-19 
02:53:45.332697 | orchestrator | 2026-02-19 02:53:45.332711 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-19 02:53:45.332735 | orchestrator | Thursday 19 February 2026 02:53:44 +0000 (0:00:00.604) 0:03:51.695 ***** 2026-02-19 02:53:45.332758 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771468213.5677624, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:45.332776 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771468239.7890728, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:45.332791 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771468234.1017313, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:45.332831 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771468242.2178495, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:50.351836 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771468248.298622, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:50.351928 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771468236.341694, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:50.351941 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771468239.8910294, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:50.351974 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:50.351996 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:50.352005 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:50.352014 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:50.352046 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:50.352077 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 
02:53:50.352086 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 02:53:50.352102 | orchestrator | 2026-02-19 02:53:50.352114 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-19 02:53:50.352124 | orchestrator | Thursday 19 February 2026 02:53:45 +0000 (0:00:01.012) 0:03:52.707 ***** 2026-02-19 02:53:50.352133 | orchestrator | changed: [testbed-manager] 2026-02-19 02:53:50.352144 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:53:50.352152 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:53:50.352161 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:53:50.352170 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:53:50.352179 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:53:50.352187 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:53:50.352196 | orchestrator | 2026-02-19 02:53:50.352205 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-02-19 02:53:50.352214 | orchestrator | Thursday 19 February 2026 02:53:46 +0000 (0:00:01.232) 0:03:53.940 ***** 2026-02-19 02:53:50.352222 | orchestrator | changed: [testbed-manager] 2026-02-19 02:53:50.352231 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:53:50.352240 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:53:50.352248 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:53:50.352257 | orchestrator | changed: [testbed-node-4] 
2026-02-19 02:53:50.352265 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:53:50.352274 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:53:50.352283 | orchestrator | 2026-02-19 02:53:50.352302 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-19 02:53:50.352312 | orchestrator | Thursday 19 February 2026 02:53:47 +0000 (0:00:01.206) 0:03:55.146 ***** 2026-02-19 02:53:50.352320 | orchestrator | changed: [testbed-manager] 2026-02-19 02:53:50.352329 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:53:50.352338 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:53:50.352346 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:53:50.352355 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:53:50.352364 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:53:50.352374 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:53:50.352384 | orchestrator | 2026-02-19 02:53:50.352415 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-19 02:53:50.352426 | orchestrator | Thursday 19 February 2026 02:53:48 +0000 (0:00:01.169) 0:03:56.316 ***** 2026-02-19 02:53:50.352436 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:53:50.352447 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:53:50.352456 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:53:50.352466 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:53:50.352475 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:53:50.352485 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:53:50.352494 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:53:50.352504 | orchestrator | 2026-02-19 02:53:50.352515 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-19 02:53:50.352525 | orchestrator | Thursday 19 February 2026 02:53:49 +0000 (0:00:00.234) 0:03:56.551 ***** 2026-02-19 
02:53:50.352534 | orchestrator | ok: [testbed-manager] 2026-02-19 02:53:50.352545 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:53:50.352555 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:53:50.352565 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:53:50.352575 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:53:50.352584 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:53:50.352594 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:53:50.352603 | orchestrator | 2026-02-19 02:53:50.352614 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-19 02:53:50.352623 | orchestrator | Thursday 19 February 2026 02:53:49 +0000 (0:00:00.723) 0:03:57.275 ***** 2026-02-19 02:53:50.352634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:53:50.352650 | orchestrator | 2026-02-19 02:53:50.352659 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-19 02:53:50.352673 | orchestrator | Thursday 19 February 2026 02:53:50 +0000 (0:00:00.449) 0:03:57.724 ***** 2026-02-19 02:55:09.474424 | orchestrator | ok: [testbed-manager] 2026-02-19 02:55:09.474536 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:55:09.474561 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:55:09.474571 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:55:09.474581 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:55:09.474590 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:55:09.474598 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:55:09.474608 | orchestrator | 2026-02-19 02:55:09.474618 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-19 02:55:09.474628 | orchestrator | 
Thursday 19 February 2026 02:53:59 +0000 (0:00:09.209) 0:04:06.934 ***** 2026-02-19 02:55:09.474637 | orchestrator | ok: [testbed-manager] 2026-02-19 02:55:09.474646 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:55:09.474655 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:55:09.474663 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:55:09.474672 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:55:09.474681 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:55:09.474693 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:55:09.474708 | orchestrator | 2026-02-19 02:55:09.474724 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-19 02:55:09.474738 | orchestrator | Thursday 19 February 2026 02:54:00 +0000 (0:00:01.374) 0:04:08.308 ***** 2026-02-19 02:55:09.474752 | orchestrator | ok: [testbed-manager] 2026-02-19 02:55:09.474767 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:55:09.474782 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:55:09.474795 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:55:09.474811 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:55:09.474827 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:55:09.474843 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:55:09.474859 | orchestrator | 2026-02-19 02:55:09.474871 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-19 02:55:09.474880 | orchestrator | Thursday 19 February 2026 02:54:02 +0000 (0:00:01.290) 0:04:09.599 ***** 2026-02-19 02:55:09.474889 | orchestrator | ok: [testbed-manager] 2026-02-19 02:55:09.474897 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:55:09.474906 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:55:09.474915 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:55:09.474924 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:55:09.474933 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:55:09.474941 | orchestrator | ok: 
[testbed-node-2]
2026-02-19 02:55:09.474950 | orchestrator |
2026-02-19 02:55:09.474959 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-19 02:55:09.474969 | orchestrator | Thursday 19 February 2026 02:54:02 +0000 (0:00:00.267) 0:04:09.867 *****
2026-02-19 02:55:09.474978 | orchestrator | ok: [testbed-manager]
2026-02-19 02:55:09.474986 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:55:09.474995 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:55:09.475003 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:55:09.475012 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:55:09.475020 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:55:09.475029 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:55:09.475038 | orchestrator |
2026-02-19 02:55:09.475046 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-19 02:55:09.475055 | orchestrator | Thursday 19 February 2026 02:54:02 +0000 (0:00:00.307) 0:04:10.174 *****
2026-02-19 02:55:09.475064 | orchestrator | ok: [testbed-manager]
2026-02-19 02:55:09.475073 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:55:09.475081 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:55:09.475112 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:55:09.475122 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:55:09.475130 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:55:09.475139 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:55:09.475148 | orchestrator |
2026-02-19 02:55:09.475156 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-19 02:55:09.475165 | orchestrator | Thursday 19 February 2026 02:54:03 +0000 (0:00:00.275) 0:04:10.449 *****
2026-02-19 02:55:09.475174 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:55:09.475207 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:55:09.475217 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:55:09.475226 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:55:09.475234 | orchestrator | ok: [testbed-manager]
2026-02-19 02:55:09.475243 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:55:09.475251 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:55:09.475260 | orchestrator |
2026-02-19 02:55:09.475269 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-19 02:55:09.475277 | orchestrator | Thursday 19 February 2026 02:54:08 +0000 (0:00:05.052) 0:04:15.502 *****
2026-02-19 02:55:09.475287 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:55:09.475299 | orchestrator |
2026-02-19 02:55:09.475308 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-19 02:55:09.475316 | orchestrator | Thursday 19 February 2026 02:54:08 +0000 (0:00:00.449) 0:04:15.951 *****
2026-02-19 02:55:09.475325 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-19 02:55:09.475333 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-19 02:55:09.475342 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-19 02:55:09.475351 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:55:09.475360 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-19 02:55:09.475385 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-19 02:55:09.475394 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-19 02:55:09.475403 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:55:09.475412 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-19 02:55:09.475420 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-19 02:55:09.475429 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:55:09.475438 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-19 02:55:09.475446 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-19 02:55:09.475455 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:55:09.475464 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-19 02:55:09.475472 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-19 02:55:09.475497 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:55:09.475507 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:55:09.475515 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-19 02:55:09.475524 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-19 02:55:09.475532 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:55:09.475541 | orchestrator |
2026-02-19 02:55:09.475550 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-19 02:55:09.475559 | orchestrator | Thursday 19 February 2026 02:54:08 +0000 (0:00:00.322) 0:04:16.274 *****
2026-02-19 02:55:09.475568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:55:09.475577 | orchestrator |
2026-02-19 02:55:09.475585 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-19 02:55:09.475603 | orchestrator | Thursday 19 February 2026 02:54:09 +0000 (0:00:00.379) 0:04:16.653 *****
2026-02-19 02:55:09.475612 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-19 02:55:09.475620 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-19 02:55:09.475629 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:55:09.475638 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-19 02:55:09.475646 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:55:09.475655 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-19 02:55:09.475663 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:55:09.475672 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-19 02:55:09.475681 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:55:09.475689 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-19 02:55:09.475698 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:55:09.475707 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:55:09.475715 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-19 02:55:09.475724 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:55:09.475732 | orchestrator |
2026-02-19 02:55:09.475741 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-19 02:55:09.475750 | orchestrator | Thursday 19 February 2026 02:54:09 +0000 (0:00:00.333) 0:04:16.987 *****
2026-02-19 02:55:09.475759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:55:09.475768 | orchestrator |
2026-02-19 02:55:09.475776 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-19 02:55:09.475785 | orchestrator | Thursday 19 February 2026 02:54:09 +0000 (0:00:00.383) 0:04:17.370 *****
2026-02-19 02:55:09.475793 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:55:09.475802 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:55:09.475811 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:55:09.475820 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:55:09.475834 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:55:09.475843 | orchestrator | changed: [testbed-manager]
2026-02-19 02:55:09.475852 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:55:09.475860 | orchestrator |
2026-02-19 02:55:09.475869 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-19 02:55:09.475878 | orchestrator | Thursday 19 February 2026 02:54:42 +0000 (0:00:32.875) 0:04:50.246 *****
2026-02-19 02:55:09.475887 | orchestrator | changed: [testbed-manager]
2026-02-19 02:55:09.475895 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:55:09.475904 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:55:09.475912 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:55:09.475921 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:55:09.475929 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:55:09.475938 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:55:09.475946 | orchestrator |
2026-02-19 02:55:09.475955 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-19 02:55:09.475964 | orchestrator | Thursday 19 February 2026 02:54:51 +0000 (0:00:09.040) 0:04:59.286 *****
2026-02-19 02:55:09.475972 | orchestrator | changed: [testbed-manager]
2026-02-19 02:55:09.475981 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:55:09.475990 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:55:09.475998 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:55:09.476007 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:55:09.476015 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:55:09.476024 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:55:09.476032 | orchestrator |
2026-02-19 02:55:09.476041 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-19 02:55:09.476056 | orchestrator | Thursday 19 February 2026 02:55:00 +0000 (0:00:08.412) 0:05:07.699 *****
2026-02-19 02:55:09.476065 | orchestrator | ok: [testbed-manager]
2026-02-19 02:55:09.476073 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:55:09.476082 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:55:09.476091 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:55:09.476099 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:55:09.476108 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:55:09.476116 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:55:09.476125 | orchestrator |
2026-02-19 02:55:09.476134 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-19 02:55:09.476142 | orchestrator | Thursday 19 February 2026 02:55:02 +0000 (0:00:01.941) 0:05:09.640 *****
2026-02-19 02:55:09.476151 | orchestrator | changed: [testbed-manager]
2026-02-19 02:55:09.476160 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:55:09.476168 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:55:09.476177 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:55:09.476203 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:55:09.476212 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:55:09.476221 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:55:09.476230 | orchestrator |
2026-02-19 02:55:09.476244 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-19 02:55:20.977335 | orchestrator | Thursday 19 February 2026 02:55:09 +0000 (0:00:07.206) 0:05:16.847 *****
2026-02-19 02:55:20.977426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:55:20.977439 | orchestrator |
2026-02-19 02:55:20.977449 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-19 02:55:20.977457 | orchestrator | Thursday 19 February 2026 02:55:09 +0000 (0:00:00.459) 0:05:17.306 *****
2026-02-19 02:55:20.977465 | orchestrator | changed: [testbed-manager]
2026-02-19 02:55:20.977474 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:55:20.977482 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:55:20.977489 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:55:20.977496 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:55:20.977504 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:55:20.977511 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:55:20.977518 | orchestrator |
2026-02-19 02:55:20.977526 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-19 02:55:20.977534 | orchestrator | Thursday 19 February 2026 02:55:10 +0000 (0:00:00.762) 0:05:18.069 *****
2026-02-19 02:55:20.977546 | orchestrator | ok: [testbed-manager]
2026-02-19 02:55:20.977559 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:55:20.977570 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:55:20.977580 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:55:20.977592 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:55:20.977603 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:55:20.977615 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:55:20.977628 | orchestrator |
2026-02-19 02:55:20.977640 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-19 02:55:20.977651 | orchestrator | Thursday 19 February 2026 02:55:12 +0000 (0:00:02.177) 0:05:20.246 *****
2026-02-19 02:55:20.977663 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:55:20.977676 | orchestrator | changed: [testbed-manager]
2026-02-19 02:55:20.977688 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:55:20.977697 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:55:20.977704 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:55:20.977712 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:55:20.977719 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:55:20.977727 | orchestrator |
2026-02-19 02:55:20.977734 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-19 02:55:20.977741 | orchestrator | Thursday 19 February 2026 02:55:13 +0000 (0:00:00.860) 0:05:21.107 *****
2026-02-19 02:55:20.977767 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:55:20.977775 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:55:20.977782 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:55:20.977789 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:55:20.977796 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:55:20.977804 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:55:20.977813 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:55:20.977825 | orchestrator |
2026-02-19 02:55:20.977837 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-19 02:55:20.977848 | orchestrator | Thursday 19 February 2026 02:55:13 +0000 (0:00:00.268) 0:05:21.375 *****
2026-02-19 02:55:20.977859 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:55:20.977870 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:55:20.977883 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:55:20.977911 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:55:20.977925 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:55:20.977932 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:55:20.977939 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:55:20.977946 | orchestrator |
2026-02-19 02:55:20.977954 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-19 02:55:20.977961 | orchestrator | Thursday 19 February 2026 02:55:14 +0000 (0:00:00.367) 0:05:21.743 *****
2026-02-19 02:55:20.977968 | orchestrator | ok: [testbed-manager]
2026-02-19 02:55:20.977976 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:55:20.977983 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:55:20.977990 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:55:20.977997 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:55:20.978004 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:55:20.978011 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:55:20.978070 | orchestrator |
2026-02-19 02:55:20.978078 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-19 02:55:20.978086 | orchestrator | Thursday 19 February 2026 02:55:14 +0000 (0:00:00.278) 0:05:22.022 *****
2026-02-19 02:55:20.978093 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:55:20.978100 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:55:20.978108 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:55:20.978115 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:55:20.978122 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:55:20.978130 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:55:20.978137 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:55:20.978197 | orchestrator |
2026-02-19 02:55:20.978208 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-19 02:55:20.978216 | orchestrator | Thursday 19 February 2026 02:55:14 +0000 (0:00:00.246) 0:05:22.268 *****
2026-02-19 02:55:20.978223 | orchestrator | ok: [testbed-manager]
2026-02-19 02:55:20.978231 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:55:20.978238 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:55:20.978245 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:55:20.978253 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:55:20.978260 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:55:20.978267 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:55:20.978275 | orchestrator |
2026-02-19 02:55:20.978286 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-19 02:55:20.978300 | orchestrator | Thursday 19 February 2026 02:55:15 +0000 (0:00:00.288) 0:05:22.556 *****
2026-02-19 02:55:20.978318 | orchestrator | ok: [testbed-manager] =>
2026-02-19 02:55:20.978333 | orchestrator |   docker_version: 5:27.5.1
2026-02-19 02:55:20.978345 | orchestrator | ok: [testbed-node-3] =>
2026-02-19 02:55:20.978355 | orchestrator |   docker_version: 5:27.5.1
2026-02-19 02:55:20.978368 | orchestrator | ok: [testbed-node-4] =>
2026-02-19 02:55:20.978379 | orchestrator |   docker_version: 5:27.5.1
2026-02-19 02:55:20.978392 | orchestrator | ok: [testbed-node-5] =>
2026-02-19 02:55:20.978403 | orchestrator |   docker_version: 5:27.5.1
2026-02-19 02:55:20.978432 | orchestrator | ok: [testbed-node-0] =>
2026-02-19 02:55:20.978456 | orchestrator |   docker_version: 5:27.5.1
2026-02-19 02:55:20.978468 | orchestrator | ok: [testbed-node-1] =>
2026-02-19 02:55:20.978480 | orchestrator |   docker_version: 5:27.5.1
2026-02-19 02:55:20.978491 | orchestrator | ok: [testbed-node-2] =>
2026-02-19 02:55:20.978504 | orchestrator |   docker_version: 5:27.5.1
2026-02-19 02:55:20.978516 | orchestrator |
2026-02-19 02:55:20.978528 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-19 02:55:20.978539 | orchestrator | Thursday 19 February 2026 02:55:15 +0000 (0:00:00.270) 0:05:22.827 *****
2026-02-19 02:55:20.978547 | orchestrator | ok: [testbed-manager] =>
2026-02-19 02:55:20.978554 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-19 02:55:20.978561 | orchestrator | ok: [testbed-node-3] =>
2026-02-19 02:55:20.978568 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-19 02:55:20.978575 | orchestrator | ok: [testbed-node-4] =>
2026-02-19 02:55:20.978583 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-19 02:55:20.978590 | orchestrator | ok: [testbed-node-5] =>
2026-02-19 02:55:20.978597 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-19 02:55:20.978604 | orchestrator | ok: [testbed-node-0] =>
2026-02-19 02:55:20.978611 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-19 02:55:20.978618 | orchestrator | ok: [testbed-node-1] =>
2026-02-19 02:55:20.978625 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-19 02:55:20.978632 | orchestrator | ok: [testbed-node-2] =>
2026-02-19 02:55:20.978640 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-19 02:55:20.978647 | orchestrator |
2026-02-19 02:55:20.978654 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-19 02:55:20.978662 | orchestrator | Thursday 19 February 2026 02:55:15 +0000 (0:00:00.264) 0:05:23.092 *****
2026-02-19 02:55:20.978669 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:55:20.978676 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:55:20.978683 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:55:20.978690 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:55:20.978697 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:55:20.978704 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:55:20.978711 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:55:20.978719 | orchestrator |
2026-02-19 02:55:20.978726 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-19 02:55:20.978733 | orchestrator | Thursday 19 February 2026 02:55:15 +0000 (0:00:00.239) 0:05:23.331 *****
2026-02-19 02:55:20.978740 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:55:20.978747 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:55:20.978754 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:55:20.978762 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:55:20.978769 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:55:20.978776 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:55:20.978783 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:55:20.978790 | orchestrator |
2026-02-19 02:55:20.978797 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-19 02:55:20.978804 | orchestrator | Thursday 19 February 2026 02:55:16 +0000 (0:00:00.250) 0:05:23.581 *****
2026-02-19 02:55:20.978813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:55:20.978823 | orchestrator |
2026-02-19 02:55:20.978837 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-19 02:55:20.978844 | orchestrator | Thursday 19 February 2026 02:55:16 +0000 (0:00:00.395) 0:05:23.977 *****
2026-02-19 02:55:20.978852 | orchestrator | ok: [testbed-manager]
2026-02-19 02:55:20.978860 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:55:20.978872 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:55:20.978884 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:55:20.978894 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:55:20.978914 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:55:20.978926 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:55:20.978939 | orchestrator |
2026-02-19 02:55:20.978951 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-19 02:55:20.978963 | orchestrator | Thursday 19 February 2026 02:55:17 +0000 (0:00:00.977) 0:05:24.954 *****
2026-02-19 02:55:20.978975 | orchestrator | ok: [testbed-manager]
2026-02-19 02:55:20.978983 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:55:20.978990 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:55:20.978997 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:55:20.979004 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:55:20.979011 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:55:20.979018 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:55:20.979025 | orchestrator |
2026-02-19 02:55:20.979032 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-19 02:55:20.979041 | orchestrator | Thursday 19 February 2026 02:55:20 +0000 (0:00:03.017) 0:05:27.972 *****
2026-02-19 02:55:20.979053 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-19 02:55:20.979069 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-19 02:55:20.979085 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-19 02:55:20.979096 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-19 02:55:20.979108 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-19 02:55:20.979120 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-19 02:55:20.979131 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:55:20.979142 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-19 02:55:20.979153 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-19 02:55:20.979189 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-19 02:55:20.979200 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:55:20.979211 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-19 02:55:20.979222 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-19 02:55:20.979234 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-19 02:55:20.979246 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:55:20.979258 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-19 02:55:20.979282 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-19 02:56:25.692546 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-19 02:56:25.692664 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:56:25.692681 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-19 02:56:25.692694 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-19 02:56:25.692706 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-19 02:56:25.692716 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:56:25.692727 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:56:25.692738 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-19 02:56:25.692749 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-19 02:56:25.692760 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-19 02:56:25.692771 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:56:25.692782 | orchestrator |
2026-02-19 02:56:25.692795 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-19 02:56:25.692807 | orchestrator | Thursday 19 February 2026 02:55:21 +0000 (0:00:00.595) 0:05:28.567 *****
2026-02-19 02:56:25.692818 | orchestrator | ok: [testbed-manager]
2026-02-19 02:56:25.692829 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:56:25.692840 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:56:25.692851 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:56:25.692862 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:56:25.692873 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:56:25.692915 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:56:25.692937 | orchestrator |
2026-02-19 02:56:25.692966 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-19 02:56:25.692983 | orchestrator | Thursday 19 February 2026 02:55:28 +0000 (0:00:07.200) 0:05:35.768 *****
2026-02-19 02:56:25.693001 | orchestrator | ok: [testbed-manager]
2026-02-19 02:56:25.693048 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:56:25.693068 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:56:25.693084 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:56:25.693100 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:56:25.693118 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:56:25.693135 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:56:25.693154 | orchestrator |
2026-02-19 02:56:25.693174 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-19 02:56:25.693192 | orchestrator | Thursday 19 February 2026 02:55:29 +0000 (0:00:01.105) 0:05:36.873 *****
2026-02-19 02:56:25.693208 | orchestrator | ok: [testbed-manager]
2026-02-19 02:56:25.693219 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:56:25.693229 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:56:25.693240 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:56:25.693251 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:56:25.693262 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:56:25.693272 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:56:25.693283 | orchestrator |
2026-02-19 02:56:25.693294 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-19 02:56:25.693305 | orchestrator | Thursday 19 February 2026 02:55:38 +0000 (0:00:08.858) 0:05:45.731 *****
2026-02-19 02:56:25.693316 | orchestrator | changed: [testbed-manager]
2026-02-19 02:56:25.693327 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:56:25.693338 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:56:25.693349 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:56:25.693359 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:56:25.693370 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:56:25.693381 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:56:25.693392 | orchestrator |
2026-02-19 02:56:25.693403 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-19 02:56:25.693415 | orchestrator | Thursday 19 February 2026 02:55:41 +0000 (0:00:03.224) 0:05:48.956 *****
2026-02-19 02:56:25.693426 | orchestrator | ok: [testbed-manager]
2026-02-19 02:56:25.693437 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:56:25.693448 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:56:25.693459 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:56:25.693469 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:56:25.693480 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:56:25.693491 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:56:25.693501 | orchestrator |
2026-02-19 02:56:25.693512 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-19 02:56:25.693523 | orchestrator | Thursday 19 February 2026 02:55:42 +0000 (0:00:01.291) 0:05:50.248 *****
2026-02-19 02:56:25.693534 | orchestrator | ok: [testbed-manager]
2026-02-19 02:56:25.693545 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:56:25.693556 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:56:25.693566 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:56:25.693577 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:56:25.693588 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:56:25.693599 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:56:25.693616 | orchestrator |
2026-02-19 02:56:25.693635 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-19 02:56:25.693653 | orchestrator | Thursday 19 February 2026 02:55:44 +0000 (0:00:00.578) 0:05:51.750 *****
2026-02-19 02:56:25.693670 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:56:25.693688 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:56:25.693704 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:56:25.693722 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:56:25.693756 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:56:25.693773 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:56:25.693791 | orchestrator | changed: [testbed-manager]
2026-02-19 02:56:25.693808 | orchestrator |
2026-02-19 02:56:25.693826 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-19 02:56:25.693845 | orchestrator | Thursday 19 February 2026 02:55:44 +0000 (0:00:00.578) 0:05:52.329 *****
2026-02-19 02:56:25.693864 | orchestrator | ok: [testbed-manager]
2026-02-19 02:56:25.693884 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:56:25.693903 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:56:25.693915 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:56:25.693926 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:56:25.693936 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:56:25.693947 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:56:25.693958 | orchestrator |
2026-02-19 02:56:25.693969 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-19 02:56:25.694000 | orchestrator | Thursday 19 February 2026 02:55:55 +0000 (0:00:10.538) 0:06:02.867 *****
2026-02-19 02:56:25.694012 | orchestrator | changed: [testbed-manager]
2026-02-19 02:56:25.694139 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:56:25.694151 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:56:25.694162 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:56:25.694173 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:56:25.694183 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:56:25.694194 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:56:25.694204 | orchestrator |
2026-02-19 02:56:25.694216 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-19 02:56:25.694227 | orchestrator | Thursday 19 February 2026 02:55:56 +0000 (0:00:00.920) 0:06:03.787 *****
2026-02-19 02:56:25.694238 | orchestrator | ok: [testbed-manager]
2026-02-19 02:56:25.694249 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:56:25.694259 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:56:25.694270 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:56:25.694281 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:56:25.694292 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:56:25.694302 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:56:25.694313 | orchestrator |
2026-02-19 02:56:25.694337 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-19 02:56:25.694348 | orchestrator | Thursday 19 February 2026 02:56:06 +0000 (0:00:10.111) 0:06:13.899 *****
2026-02-19 02:56:25.694359 | orchestrator | ok: [testbed-manager]
2026-02-19 02:56:25.694369 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:56:25.694380 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:56:25.694391 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:56:25.694402 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:56:25.694412 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:56:25.694423 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:56:25.694434 | orchestrator |
2026-02-19 02:56:25.694445 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-19 02:56:25.694456 | orchestrator | Thursday 19 February 2026 02:56:18 +0000 (0:00:12.175) 0:06:26.075 *****
2026-02-19 02:56:25.694467 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-19 02:56:25.694478 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-19 02:56:25.694489 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-19 02:56:25.694499 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-19 02:56:25.694510 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-19 02:56:25.694521 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-19 02:56:25.694532 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-19 02:56:25.694543 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-19 02:56:25.694553 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-19 02:56:25.694575 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-19 02:56:25.694586 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-19 02:56:25.694644 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-19 02:56:25.694656 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-19 02:56:25.694667 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-19 02:56:25.694678 | orchestrator |
2026-02-19 02:56:25.694689 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-19 02:56:25.694700 | orchestrator | Thursday 19 February 2026 02:56:19 +0000 (0:00:01.169) 0:06:27.245 *****
2026-02-19 02:56:25.694716 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:56:25.694727 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:56:25.694737 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:56:25.694748 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:56:25.694759 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:56:25.694770 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:56:25.694787 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:56:25.694806 | orchestrator |
2026-02-19 02:56:25.694818 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-19 02:56:25.694829 | orchestrator | Thursday 19 February 2026 02:56:20 +0000 (0:00:00.424) 0:06:27.669 *****
2026-02-19 02:56:25.694840 | orchestrator | ok: [testbed-manager]
2026-02-19 02:56:25.694850 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:56:25.694861 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:56:25.694872 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:56:25.694883 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:56:25.694893 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:56:25.694904 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:56:25.694915 | orchestrator |
2026-02-19 02:56:25.694926 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-19 02:56:25.694938 | orchestrator | Thursday 19 February 2026 02:56:24 +0000 (0:00:04.484) 0:06:32.154 *****
2026-02-19 02:56:25.694949 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:56:25.694960 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:56:25.694970 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:56:25.694981 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:56:25.694992 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:56:25.695002 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:56:25.695013 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:56:25.695086 | orchestrator |
2026-02-19 02:56:25.695099 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-19 02:56:25.695110 | orchestrator | Thursday 19 February 2026 02:56:25 +0000 (0:00:00.483) 0:06:32.638 *****
2026-02-19 02:56:25.695121 | orchestrator | skipping: [testbed-manager] =>
(item=python3-docker)  2026-02-19 02:56:25.695133 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-02-19 02:56:25.695143 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:56:25.695154 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-02-19 02:56:25.695165 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-02-19 02:56:25.695176 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:56:25.695187 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-02-19 02:56:25.695198 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-02-19 02:56:25.695209 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:56:25.695232 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-02-19 02:56:44.301137 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-02-19 02:56:44.301220 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:56:44.301231 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-02-19 02:56:44.301239 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-02-19 02:56:44.301247 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:56:44.301279 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-02-19 02:56:44.301289 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-02-19 02:56:44.301297 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:56:44.301304 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-02-19 02:56:44.301312 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-02-19 02:56:44.301329 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:56:44.301336 | orchestrator | 2026-02-19 02:56:44.301353 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-02-19 02:56:44.301363 | 
orchestrator | Thursday 19 February 2026 02:56:25 +0000 (0:00:00.689) 0:06:33.327 ***** 2026-02-19 02:56:44.301371 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:56:44.301378 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:56:44.301388 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:56:44.301392 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:56:44.301397 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:56:44.301402 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:56:44.301406 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:56:44.301411 | orchestrator | 2026-02-19 02:56:44.301416 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-02-19 02:56:44.301421 | orchestrator | Thursday 19 February 2026 02:56:26 +0000 (0:00:00.483) 0:06:33.811 ***** 2026-02-19 02:56:44.301426 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:56:44.301430 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:56:44.301435 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:56:44.301439 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:56:44.301444 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:56:44.301448 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:56:44.301453 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:56:44.301457 | orchestrator | 2026-02-19 02:56:44.301462 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-02-19 02:56:44.301466 | orchestrator | Thursday 19 February 2026 02:56:26 +0000 (0:00:00.476) 0:06:34.288 ***** 2026-02-19 02:56:44.301471 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:56:44.301475 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:56:44.301480 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:56:44.301484 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:56:44.301489 | orchestrator | 
skipping: [testbed-node-0] 2026-02-19 02:56:44.301493 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:56:44.301497 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:56:44.301502 | orchestrator | 2026-02-19 02:56:44.301506 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-19 02:56:44.301511 | orchestrator | Thursday 19 February 2026 02:56:27 +0000 (0:00:00.513) 0:06:34.801 ***** 2026-02-19 02:56:44.301515 | orchestrator | ok: [testbed-manager] 2026-02-19 02:56:44.301520 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:56:44.301525 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:56:44.301532 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:56:44.301540 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:56:44.301547 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:56:44.301554 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:56:44.301561 | orchestrator | 2026-02-19 02:56:44.301568 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-19 02:56:44.301576 | orchestrator | Thursday 19 February 2026 02:56:29 +0000 (0:00:02.013) 0:06:36.814 ***** 2026-02-19 02:56:44.301584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:56:44.301593 | orchestrator | 2026-02-19 02:56:44.301600 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-19 02:56:44.301608 | orchestrator | Thursday 19 February 2026 02:56:30 +0000 (0:00:00.790) 0:06:37.605 ***** 2026-02-19 02:56:44.301627 | orchestrator | ok: [testbed-manager] 2026-02-19 02:56:44.301635 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:56:44.301643 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:56:44.301651 | orchestrator | 
changed: [testbed-node-5] 2026-02-19 02:56:44.301658 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:56:44.301665 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:56:44.301673 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:56:44.301680 | orchestrator | 2026-02-19 02:56:44.301687 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-19 02:56:44.301695 | orchestrator | Thursday 19 February 2026 02:56:31 +0000 (0:00:00.842) 0:06:38.447 ***** 2026-02-19 02:56:44.301701 | orchestrator | ok: [testbed-manager] 2026-02-19 02:56:44.301706 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:56:44.301711 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:56:44.301717 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:56:44.301722 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:56:44.301727 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:56:44.301732 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:56:44.301738 | orchestrator | 2026-02-19 02:56:44.301743 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-19 02:56:44.301748 | orchestrator | Thursday 19 February 2026 02:56:31 +0000 (0:00:00.849) 0:06:39.296 ***** 2026-02-19 02:56:44.301754 | orchestrator | ok: [testbed-manager] 2026-02-19 02:56:44.301759 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:56:44.301767 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:56:44.301775 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:56:44.301782 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:56:44.301790 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:56:44.301797 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:56:44.301804 | orchestrator | 2026-02-19 02:56:44.301812 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-02-19 02:56:44.301837 | 
orchestrator | Thursday 19 February 2026 02:56:33 +0000 (0:00:01.533) 0:06:40.829 ***** 2026-02-19 02:56:44.301846 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:56:44.301853 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:56:44.301862 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:56:44.301868 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:56:44.301873 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:56:44.301881 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:56:44.301888 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:56:44.301896 | orchestrator | 2026-02-19 02:56:44.301904 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-19 02:56:44.301912 | orchestrator | Thursday 19 February 2026 02:56:34 +0000 (0:00:01.373) 0:06:42.203 ***** 2026-02-19 02:56:44.301920 | orchestrator | ok: [testbed-manager] 2026-02-19 02:56:44.301927 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:56:44.301935 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:56:44.301943 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:56:44.301951 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:56:44.301961 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:56:44.301967 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:56:44.301972 | orchestrator | 2026-02-19 02:56:44.301978 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-19 02:56:44.302009 | orchestrator | Thursday 19 February 2026 02:56:36 +0000 (0:00:01.300) 0:06:43.503 ***** 2026-02-19 02:56:44.302057 | orchestrator | changed: [testbed-manager] 2026-02-19 02:56:44.302065 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:56:44.302073 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:56:44.302080 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:56:44.302088 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:56:44.302094 | 
orchestrator | changed: [testbed-node-1] 2026-02-19 02:56:44.302101 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:56:44.302106 | orchestrator | 2026-02-19 02:56:44.302118 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-19 02:56:44.302123 | orchestrator | Thursday 19 February 2026 02:56:37 +0000 (0:00:01.426) 0:06:44.930 ***** 2026-02-19 02:56:44.302128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:56:44.302133 | orchestrator | 2026-02-19 02:56:44.302138 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-19 02:56:44.302142 | orchestrator | Thursday 19 February 2026 02:56:38 +0000 (0:00:01.007) 0:06:45.938 ***** 2026-02-19 02:56:44.302147 | orchestrator | ok: [testbed-manager] 2026-02-19 02:56:44.302152 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:56:44.302156 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:56:44.302161 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:56:44.302165 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:56:44.302170 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:56:44.302174 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:56:44.302179 | orchestrator | 2026-02-19 02:56:44.302183 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-19 02:56:44.302188 | orchestrator | Thursday 19 February 2026 02:56:39 +0000 (0:00:01.361) 0:06:47.300 ***** 2026-02-19 02:56:44.302192 | orchestrator | ok: [testbed-manager] 2026-02-19 02:56:44.302197 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:56:44.302201 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:56:44.302206 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:56:44.302210 | orchestrator | 
ok: [testbed-node-0] 2026-02-19 02:56:44.302225 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:56:44.302230 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:56:44.302235 | orchestrator | 2026-02-19 02:56:44.302239 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-19 02:56:44.302244 | orchestrator | Thursday 19 February 2026 02:56:40 +0000 (0:00:01.054) 0:06:48.354 ***** 2026-02-19 02:56:44.302248 | orchestrator | ok: [testbed-manager] 2026-02-19 02:56:44.302253 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:56:44.302257 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:56:44.302262 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:56:44.302266 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:56:44.302271 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:56:44.302275 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:56:44.302280 | orchestrator | 2026-02-19 02:56:44.302284 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-19 02:56:44.302289 | orchestrator | Thursday 19 February 2026 02:56:42 +0000 (0:00:01.109) 0:06:49.464 ***** 2026-02-19 02:56:44.302293 | orchestrator | ok: [testbed-manager] 2026-02-19 02:56:44.302298 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:56:44.302302 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:56:44.302307 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:56:44.302311 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:56:44.302315 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:56:44.302320 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:56:44.302324 | orchestrator | 2026-02-19 02:56:44.302329 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-19 02:56:44.302333 | orchestrator | Thursday 19 February 2026 02:56:43 +0000 (0:00:01.216) 0:06:50.680 ***** 2026-02-19 02:56:44.302338 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:56:44.302343 | orchestrator | 2026-02-19 02:56:44.302347 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-19 02:56:44.302352 | orchestrator | Thursday 19 February 2026 02:56:44 +0000 (0:00:00.749) 0:06:51.429 ***** 2026-02-19 02:56:44.302356 | orchestrator | 2026-02-19 02:56:44.302361 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-19 02:56:44.302369 | orchestrator | Thursday 19 February 2026 02:56:44 +0000 (0:00:00.035) 0:06:51.465 ***** 2026-02-19 02:56:44.302373 | orchestrator | 2026-02-19 02:56:44.302378 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-19 02:56:44.302382 | orchestrator | Thursday 19 February 2026 02:56:44 +0000 (0:00:00.038) 0:06:51.503 ***** 2026-02-19 02:56:44.302387 | orchestrator | 2026-02-19 02:56:44.302392 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-19 02:56:44.302403 | orchestrator | Thursday 19 February 2026 02:56:44 +0000 (0:00:00.035) 0:06:51.539 ***** 2026-02-19 02:57:10.268458 | orchestrator | 2026-02-19 02:57:10.269452 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-19 02:57:10.269485 | orchestrator | Thursday 19 February 2026 02:56:44 +0000 (0:00:00.034) 0:06:51.573 ***** 2026-02-19 02:57:10.269490 | orchestrator | 2026-02-19 02:57:10.269495 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-19 02:57:10.269499 | orchestrator | Thursday 19 February 2026 02:56:44 +0000 (0:00:00.038) 0:06:51.612 ***** 2026-02-19 02:57:10.269503 | orchestrator | 2026-02-19 
02:57:10.269507 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-19 02:57:10.269511 | orchestrator | Thursday 19 February 2026 02:56:44 +0000 (0:00:00.034) 0:06:51.647 ***** 2026-02-19 02:57:10.269515 | orchestrator | 2026-02-19 02:57:10.269519 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-19 02:57:10.269523 | orchestrator | Thursday 19 February 2026 02:56:44 +0000 (0:00:00.036) 0:06:51.683 ***** 2026-02-19 02:57:10.269527 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:57:10.269532 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:57:10.269536 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:57:10.269540 | orchestrator | 2026-02-19 02:57:10.269543 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-19 02:57:10.269547 | orchestrator | Thursday 19 February 2026 02:56:45 +0000 (0:00:01.207) 0:06:52.890 ***** 2026-02-19 02:57:10.269551 | orchestrator | changed: [testbed-manager] 2026-02-19 02:57:10.269556 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:57:10.269560 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:57:10.269564 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:57:10.269567 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:57:10.269571 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:57:10.269575 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:57:10.269578 | orchestrator | 2026-02-19 02:57:10.269582 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-19 02:57:10.269586 | orchestrator | Thursday 19 February 2026 02:56:46 +0000 (0:00:01.417) 0:06:54.308 ***** 2026-02-19 02:57:10.269590 | orchestrator | changed: [testbed-manager] 2026-02-19 02:57:10.269593 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:57:10.269597 | orchestrator | changed: [testbed-node-4] 2026-02-19 
02:57:10.269601 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:57:10.269604 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:57:10.269608 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:57:10.269612 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:57:10.269616 | orchestrator | 2026-02-19 02:57:10.269619 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-19 02:57:10.269623 | orchestrator | Thursday 19 February 2026 02:56:48 +0000 (0:00:01.133) 0:06:55.442 ***** 2026-02-19 02:57:10.269627 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:57:10.269630 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:57:10.269634 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:57:10.269638 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:57:10.269642 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:57:10.269645 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:57:10.269649 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:57:10.269653 | orchestrator | 2026-02-19 02:57:10.269657 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-19 02:57:10.269660 | orchestrator | Thursday 19 February 2026 02:56:50 +0000 (0:00:02.242) 0:06:57.685 ***** 2026-02-19 02:57:10.269679 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:57:10.269689 | orchestrator | 2026-02-19 02:57:10.269693 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-19 02:57:10.269697 | orchestrator | Thursday 19 February 2026 02:56:50 +0000 (0:00:00.096) 0:06:57.781 ***** 2026-02-19 02:57:10.269701 | orchestrator | ok: [testbed-manager] 2026-02-19 02:57:10.269704 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:57:10.269708 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:57:10.269712 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:57:10.269716 | 
orchestrator | changed: [testbed-node-1] 2026-02-19 02:57:10.269719 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:57:10.269723 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:57:10.269727 | orchestrator | 2026-02-19 02:57:10.269730 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-19 02:57:10.269736 | orchestrator | Thursday 19 February 2026 02:56:51 +0000 (0:00:00.998) 0:06:58.780 ***** 2026-02-19 02:57:10.269739 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:57:10.269743 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:57:10.269747 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:57:10.269750 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:57:10.269754 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:57:10.269758 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:57:10.269761 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:57:10.269765 | orchestrator | 2026-02-19 02:57:10.269769 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-19 02:57:10.269772 | orchestrator | Thursday 19 February 2026 02:56:51 +0000 (0:00:00.514) 0:06:59.295 ***** 2026-02-19 02:57:10.269777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:57:10.269782 | orchestrator | 2026-02-19 02:57:10.269786 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-19 02:57:10.269790 | orchestrator | Thursday 19 February 2026 02:56:52 +0000 (0:00:01.046) 0:07:00.342 ***** 2026-02-19 02:57:10.269794 | orchestrator | ok: [testbed-manager] 2026-02-19 02:57:10.269797 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:57:10.269801 | orchestrator | ok: 
[testbed-node-4] 2026-02-19 02:57:10.269805 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:57:10.269808 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:57:10.269812 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:57:10.269816 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:57:10.269819 | orchestrator | 2026-02-19 02:57:10.269823 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-19 02:57:10.269827 | orchestrator | Thursday 19 February 2026 02:56:53 +0000 (0:00:00.842) 0:07:01.184 ***** 2026-02-19 02:57:10.269831 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-19 02:57:10.269849 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-19 02:57:10.269853 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-19 02:57:10.269857 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-19 02:57:10.269861 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-19 02:57:10.269864 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-19 02:57:10.269868 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-19 02:57:10.269872 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-19 02:57:10.269876 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-19 02:57:10.269879 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-19 02:57:10.269883 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-19 02:57:10.269887 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-19 02:57:10.269894 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-19 02:57:10.269898 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-19 02:57:10.269902 | orchestrator | 2026-02-19 02:57:10.269906 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-19 02:57:10.269909 | orchestrator | Thursday 19 February 2026 02:56:56 +0000 (0:00:02.406) 0:07:03.590 ***** 2026-02-19 02:57:10.269913 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:57:10.269917 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:57:10.269921 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:57:10.269924 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:57:10.269928 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:57:10.269932 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:57:10.269970 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:57:10.269975 | orchestrator | 2026-02-19 02:57:10.269979 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-19 02:57:10.269983 | orchestrator | Thursday 19 February 2026 02:56:56 +0000 (0:00:00.646) 0:07:04.237 ***** 2026-02-19 02:57:10.269988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 02:57:10.269994 | orchestrator | 2026-02-19 02:57:10.269997 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-19 02:57:10.270001 | orchestrator | Thursday 19 February 2026 02:56:57 +0000 (0:00:00.755) 0:07:04.992 ***** 2026-02-19 02:57:10.270005 | orchestrator | ok: [testbed-manager] 2026-02-19 02:57:10.270008 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:57:10.270033 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:57:10.270038 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:57:10.270042 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:57:10.270046 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:57:10.270049 | orchestrator | ok: 
[testbed-node-2] 2026-02-19 02:57:10.270053 | orchestrator | 2026-02-19 02:57:10.270057 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-19 02:57:10.270061 | orchestrator | Thursday 19 February 2026 02:56:58 +0000 (0:00:00.822) 0:07:05.815 ***** 2026-02-19 02:57:10.270067 | orchestrator | ok: [testbed-manager] 2026-02-19 02:57:10.270071 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:57:10.270075 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:57:10.270079 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:57:10.270083 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:57:10.270086 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:57:10.270090 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:57:10.270094 | orchestrator | 2026-02-19 02:57:10.270098 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-19 02:57:10.270101 | orchestrator | Thursday 19 February 2026 02:56:59 +0000 (0:00:01.020) 0:07:06.836 ***** 2026-02-19 02:57:10.270105 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:57:10.270109 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:57:10.270113 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:57:10.270117 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:57:10.270120 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:57:10.270124 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:57:10.270128 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:57:10.270132 | orchestrator | 2026-02-19 02:57:10.270135 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-19 02:57:10.270139 | orchestrator | Thursday 19 February 2026 02:56:59 +0000 (0:00:00.457) 0:07:07.293 ***** 2026-02-19 02:57:10.270143 | orchestrator | ok: [testbed-manager] 2026-02-19 02:57:10.270147 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:57:10.270152 | 
orchestrator | ok: [testbed-node-5]
2026-02-19 02:57:10.270158 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:57:10.270164 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:57:10.270171 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:57:10.270175 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:57:10.270179 | orchestrator |
2026-02-19 02:57:10.270182 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-02-19 02:57:10.270186 | orchestrator | Thursday 19 February 2026 02:57:01 +0000 (0:00:01.514) 0:07:08.807 *****
2026-02-19 02:57:10.270190 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:57:10.270194 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:57:10.270197 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:57:10.270201 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:57:10.270205 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:57:10.270209 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:57:10.270212 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:57:10.270216 | orchestrator |
2026-02-19 02:57:10.270220 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-02-19 02:57:10.270223 | orchestrator | Thursday 19 February 2026 02:57:01 +0000 (0:00:00.465) 0:07:09.272 *****
2026-02-19 02:57:10.270227 | orchestrator | ok: [testbed-manager]
2026-02-19 02:57:10.270231 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:57:10.270235 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:57:10.270238 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:57:10.270242 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:57:10.270246 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:57:10.270253 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:57:42.560230 | orchestrator |
2026-02-19 02:57:42.560360 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-02-19 02:57:42.560373 | orchestrator | Thursday 19 February 2026 02:57:10 +0000 (0:00:08.374) 0:07:17.647 *****
2026-02-19 02:57:42.560381 | orchestrator | ok: [testbed-manager]
2026-02-19 02:57:42.560388 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:57:42.560395 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:57:42.560402 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:57:42.560408 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:57:42.560414 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:57:42.560421 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:57:42.560427 | orchestrator |
2026-02-19 02:57:42.560434 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-02-19 02:57:42.560440 | orchestrator | Thursday 19 February 2026 02:57:11 +0000 (0:00:01.543) 0:07:19.190 *****
2026-02-19 02:57:42.560446 | orchestrator | ok: [testbed-manager]
2026-02-19 02:57:42.560453 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:57:42.560459 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:57:42.560465 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:57:42.560471 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:57:42.560477 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:57:42.560483 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:57:42.560489 | orchestrator |
2026-02-19 02:57:42.560495 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-02-19 02:57:42.560501 | orchestrator | Thursday 19 February 2026 02:57:13 +0000 (0:00:01.857) 0:07:21.048 *****
2026-02-19 02:57:42.560507 | orchestrator | ok: [testbed-manager]
2026-02-19 02:57:42.560514 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:57:42.560520 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:57:42.560526 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:57:42.560532 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:57:42.560538 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:57:42.560544 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:57:42.560550 | orchestrator |
2026-02-19 02:57:42.560556 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-19 02:57:42.560562 | orchestrator | Thursday 19 February 2026 02:57:15 +0000 (0:00:01.625) 0:07:22.673 *****
2026-02-19 02:57:42.560568 | orchestrator | ok: [testbed-manager]
2026-02-19 02:57:42.560575 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:57:42.560581 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:57:42.560608 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:57:42.560614 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:57:42.560620 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:57:42.560626 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:57:42.560632 | orchestrator |
2026-02-19 02:57:42.560638 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-19 02:57:42.560645 | orchestrator | Thursday 19 February 2026 02:57:16 +0000 (0:00:00.850) 0:07:23.524 *****
2026-02-19 02:57:42.560651 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:57:42.560657 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:57:42.560663 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:57:42.560669 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:57:42.560675 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:57:42.560681 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:57:42.560687 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:57:42.560694 | orchestrator |
2026-02-19 02:57:42.560700 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-02-19 02:57:42.560706 | orchestrator | Thursday 19 February 2026 02:57:17 +0000 (0:00:00.971) 0:07:24.496 *****
2026-02-19 02:57:42.560712 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:57:42.560718 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:57:42.560724 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:57:42.560730 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:57:42.560736 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:57:42.560742 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:57:42.560748 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:57:42.560754 | orchestrator |
2026-02-19 02:57:42.560760 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-02-19 02:57:42.560766 | orchestrator | Thursday 19 February 2026 02:57:17 +0000 (0:00:00.504) 0:07:25.000 *****
2026-02-19 02:57:42.560772 | orchestrator | ok: [testbed-manager]
2026-02-19 02:57:42.560798 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:57:42.560809 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:57:42.560819 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:57:42.560829 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:57:42.560839 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:57:42.560848 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:57:42.560858 | orchestrator |
2026-02-19 02:57:42.560868 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-02-19 02:57:42.560878 | orchestrator | Thursday 19 February 2026 02:57:18 +0000 (0:00:00.538) 0:07:25.539 *****
2026-02-19 02:57:42.560912 | orchestrator | ok: [testbed-manager]
2026-02-19 02:57:42.560924 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:57:42.560935 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:57:42.560946 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:57:42.560956 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:57:42.560967 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:57:42.560976 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:57:42.560983 | orchestrator |
2026-02-19 02:57:42.560990 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-02-19 02:57:42.560998 | orchestrator | Thursday 19 February 2026 02:57:18 +0000 (0:00:00.682) 0:07:26.222 *****
2026-02-19 02:57:42.561004 | orchestrator | ok: [testbed-manager]
2026-02-19 02:57:42.561011 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:57:42.561018 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:57:42.561025 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:57:42.561032 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:57:42.561039 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:57:42.561046 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:57:42.561053 | orchestrator |
2026-02-19 02:57:42.561060 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-02-19 02:57:42.561067 | orchestrator | Thursday 19 February 2026 02:57:19 +0000 (0:00:00.550) 0:07:26.773 *****
2026-02-19 02:57:42.561074 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:57:42.561081 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:57:42.561095 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:57:42.561102 | orchestrator | ok: [testbed-manager]
2026-02-19 02:57:42.561109 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:57:42.561116 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:57:42.561123 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:57:42.561131 | orchestrator |
2026-02-19 02:57:42.561156 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-19 02:57:42.561164 | orchestrator | Thursday 19 February 2026 02:57:23 +0000 (0:00:04.464) 0:07:31.237 *****
2026-02-19 02:57:42.561172 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:57:42.561178 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:57:42.561184 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:57:42.561190 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:57:42.561196 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:57:42.561202 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:57:42.561209 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:57:42.561215 | orchestrator |
2026-02-19 02:57:42.561221 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-19 02:57:42.561227 | orchestrator | Thursday 19 February 2026 02:57:24 +0000 (0:00:00.560) 0:07:31.798 *****
2026-02-19 02:57:42.561235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:57:42.561244 | orchestrator |
2026-02-19 02:57:42.561250 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-19 02:57:42.561256 | orchestrator | Thursday 19 February 2026 02:57:25 +0000 (0:00:00.982) 0:07:32.780 *****
2026-02-19 02:57:42.561263 | orchestrator | ok: [testbed-manager]
2026-02-19 02:57:42.561269 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:57:42.561275 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:57:42.561281 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:57:42.561287 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:57:42.561293 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:57:42.561299 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:57:42.561309 | orchestrator |
2026-02-19 02:57:42.561319 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-19 02:57:42.561330 | orchestrator | Thursday 19 February 2026 02:57:27 +0000 (0:00:02.045) 0:07:34.826 *****
2026-02-19 02:57:42.561339 | orchestrator | ok: [testbed-manager]
2026-02-19 02:57:42.561350 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:57:42.561359 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:57:42.561369 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:57:42.561378 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:57:42.561387 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:57:42.561396 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:57:42.561407 | orchestrator |
2026-02-19 02:57:42.561417 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-19 02:57:42.561428 | orchestrator | Thursday 19 February 2026 02:57:28 +0000 (0:00:01.284) 0:07:36.110 *****
2026-02-19 02:57:42.561438 | orchestrator | ok: [testbed-manager]
2026-02-19 02:57:42.561448 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:57:42.561459 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:57:42.561465 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:57:42.561471 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:57:42.561478 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:57:42.561484 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:57:42.561490 | orchestrator |
2026-02-19 02:57:42.561496 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-19 02:57:42.561502 | orchestrator | Thursday 19 February 2026 02:57:29 +0000 (0:00:00.781) 0:07:36.891 *****
2026-02-19 02:57:42.561515 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-19 02:57:42.561523 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-19 02:57:42.561535 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-19 02:57:42.561542 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-19 02:57:42.561548 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-19 02:57:42.561554 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-19 02:57:42.561560 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-19 02:57:42.561566 | orchestrator |
2026-02-19 02:57:42.561573 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-19 02:57:42.561579 | orchestrator | Thursday 19 February 2026 02:57:31 +0000 (0:00:01.687) 0:07:38.578 *****
2026-02-19 02:57:42.561585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:57:42.561592 | orchestrator |
2026-02-19 02:57:42.561598 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-19 02:57:42.561604 | orchestrator | Thursday 19 February 2026 02:57:31 +0000 (0:00:00.652) 0:07:39.231 *****
2026-02-19 02:57:42.561610 | orchestrator | changed: [testbed-manager]
2026-02-19 02:57:42.561616 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:57:42.561623 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:57:42.561629 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:57:42.561635 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:57:42.561644 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:57:42.561655 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:57:42.561665 | orchestrator |
2026-02-19 02:57:42.561682 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-19 02:58:14.656217 | orchestrator | Thursday 19 February 2026 02:57:42 +0000 (0:00:10.704) 0:07:49.936 *****
2026-02-19 02:58:14.656340 | orchestrator | ok: [testbed-manager]
2026-02-19 02:58:14.656367 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:58:14.656382 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:58:14.656397 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:58:14.656410 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:58:14.656420 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:58:14.656428 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:58:14.656437 | orchestrator |
2026-02-19 02:58:14.656447 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-19 02:58:14.656457 | orchestrator | Thursday 19 February 2026 02:57:44 +0000 (0:00:01.922) 0:07:51.858 *****
2026-02-19 02:58:14.656466 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:58:14.656474 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:58:14.656483 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:58:14.656492 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:58:14.656500 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:58:14.656509 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:58:14.656517 | orchestrator |
2026-02-19 02:58:14.656527 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-19 02:58:14.656536 | orchestrator | Thursday 19 February 2026 02:57:45 +0000 (0:00:01.283) 0:07:53.142 *****
2026-02-19 02:58:14.656545 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:14.656564 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:14.656585 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:14.656600 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:14.656615 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:14.656673 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:14.656688 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:14.656702 | orchestrator |
2026-02-19 02:58:14.656717 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-19 02:58:14.656732 | orchestrator |
2026-02-19 02:58:14.656746 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-19 02:58:14.656761 | orchestrator | Thursday 19 February 2026 02:57:46 +0000 (0:00:01.223) 0:07:54.365 *****
2026-02-19 02:58:14.656776 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:58:14.656792 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:58:14.656807 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:58:14.656822 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:58:14.656950 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:58:14.656973 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:58:14.656989 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:58:14.657005 | orchestrator |
2026-02-19 02:58:14.657022 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-19 02:58:14.657039 | orchestrator |
2026-02-19 02:58:14.657057 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-19 02:58:14.657074 | orchestrator | Thursday 19 February 2026 02:57:47 +0000 (0:00:00.661) 0:07:55.027 *****
2026-02-19 02:58:14.657090 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:14.657107 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:14.657121 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:14.657136 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:14.657151 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:14.657167 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:14.657183 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:14.657200 | orchestrator |
2026-02-19 02:58:14.657217 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-19 02:58:14.657252 | orchestrator | Thursday 19 February 2026 02:57:48 +0000 (0:00:01.349) 0:07:56.376 *****
2026-02-19 02:58:14.657271 | orchestrator | ok: [testbed-manager]
2026-02-19 02:58:14.657288 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:58:14.657304 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:58:14.657320 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:58:14.657337 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:58:14.657353 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:58:14.657368 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:58:14.657385 | orchestrator |
2026-02-19 02:58:14.657399 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-19 02:58:14.657413 | orchestrator | Thursday 19 February 2026 02:57:50 +0000 (0:00:01.397) 0:07:57.773 *****
2026-02-19 02:58:14.657428 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:58:14.657445 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:58:14.657461 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:58:14.657477 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:58:14.657493 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:58:14.657508 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:58:14.657523 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:58:14.657538 | orchestrator |
2026-02-19 02:58:14.657554 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-19 02:58:14.657568 | orchestrator | Thursday 19 February 2026 02:57:50 +0000 (0:00:00.461) 0:07:58.235 *****
2026-02-19 02:58:14.657584 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:58:14.657599 | orchestrator |
2026-02-19 02:58:14.657614 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-19 02:58:14.657628 | orchestrator | Thursday 19 February 2026 02:57:51 +0000 (0:00:00.906) 0:07:59.142 *****
2026-02-19 02:58:14.657643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:58:14.657678 | orchestrator |
2026-02-19 02:58:14.657695 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-19 02:58:14.657710 | orchestrator | Thursday 19 February 2026 02:57:52 +0000 (0:00:00.747) 0:07:59.889 *****
2026-02-19 02:58:14.657725 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:14.657740 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:14.657756 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:14.657772 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:14.657786 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:14.657801 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:14.657817 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:14.657854 | orchestrator |
2026-02-19 02:58:14.657898 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-19 02:58:14.657914 | orchestrator | Thursday 19 February 2026 02:58:03 +0000 (0:00:10.687) 0:08:10.577 *****
2026-02-19 02:58:14.657928 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:14.657942 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:14.657958 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:14.657973 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:14.657989 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:14.658005 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:14.658145 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:14.658172 | orchestrator |
2026-02-19 02:58:14.658188 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-19 02:58:14.658202 | orchestrator | Thursday 19 February 2026 02:58:04 +0000 (0:00:00.839) 0:08:11.417 *****
2026-02-19 02:58:14.658216 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:14.658228 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:14.658243 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:14.658266 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:14.658283 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:14.658296 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:14.658309 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:14.658323 | orchestrator |
2026-02-19 02:58:14.658339 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-19 02:58:14.658354 | orchestrator | Thursday 19 February 2026 02:58:05 +0000 (0:00:01.326) 0:08:12.743 *****
2026-02-19 02:58:14.658369 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:14.658384 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:14.658399 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:14.658415 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:14.658430 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:14.658444 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:14.658453 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:14.658462 | orchestrator |
2026-02-19 02:58:14.658471 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-19 02:58:14.658479 | orchestrator | Thursday 19 February 2026 02:58:07 +0000 (0:00:01.940) 0:08:14.683 *****
2026-02-19 02:58:14.658488 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:14.658496 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:14.658505 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:14.658513 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:14.658523 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:14.658531 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:14.658540 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:14.658548 | orchestrator |
2026-02-19 02:58:14.658557 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-19 02:58:14.658566 | orchestrator | Thursday 19 February 2026 02:58:08 +0000 (0:00:01.311) 0:08:15.995 *****
2026-02-19 02:58:14.658574 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:14.658599 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:14.658629 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:14.658638 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:14.658646 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:14.658655 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:14.658664 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:14.658673 | orchestrator |
2026-02-19 02:58:14.658681 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-19 02:58:14.658690 | orchestrator |
2026-02-19 02:58:14.658708 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-19 02:58:14.658717 | orchestrator | Thursday 19 February 2026 02:58:09 +0000 (0:00:01.151) 0:08:17.146 *****
2026-02-19 02:58:14.658727 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:58:14.658736 | orchestrator |
2026-02-19 02:58:14.658745 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-19 02:58:14.658754 | orchestrator | Thursday 19 February 2026 02:58:10 +0000 (0:00:00.836) 0:08:17.982 *****
2026-02-19 02:58:14.658764 | orchestrator | ok: [testbed-manager]
2026-02-19 02:58:14.658775 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:58:14.658785 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:58:14.658795 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:58:14.658804 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:58:14.658815 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:58:14.658824 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:58:14.658863 | orchestrator |
2026-02-19 02:58:14.658878 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-19 02:58:14.658888 | orchestrator | Thursday 19 February 2026 02:58:11 +0000 (0:00:01.082) 0:08:19.064 *****
2026-02-19 02:58:14.658898 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:14.658910 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:14.658920 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:14.658930 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:14.658940 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:14.658951 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:14.658961 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:14.658971 | orchestrator |
2026-02-19 02:58:14.658981 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-19 02:58:14.658990 | orchestrator | Thursday 19 February 2026 02:58:12 +0000 (0:00:01.146) 0:08:20.211 *****
2026-02-19 02:58:14.659001 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 02:58:14.659011 | orchestrator |
2026-02-19 02:58:14.659021 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-19 02:58:14.659031 | orchestrator | Thursday 19 February 2026 02:58:13 +0000 (0:00:00.979) 0:08:21.191 *****
2026-02-19 02:58:14.659041 | orchestrator | ok: [testbed-manager]
2026-02-19 02:58:14.659051 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:58:14.659061 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:58:14.659071 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:58:14.659080 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:58:14.659090 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:58:14.659100 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:58:14.659110 | orchestrator |
2026-02-19 02:58:14.659134 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-19 02:58:16.200344 | orchestrator | Thursday 19 February 2026 02:58:14 +0000 (0:00:00.840) 0:08:22.031 *****
2026-02-19 02:58:16.200450 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:16.200464 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:16.200471 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:16.200477 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:16.200484 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:16.200490 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:16.200496 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:16.200528 | orchestrator |
2026-02-19 02:58:16.200537 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 02:58:16.200545 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-19 02:58:16.200553 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-19 02:58:16.200559 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-19 02:58:16.200566 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-19 02:58:16.200572 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-19 02:58:16.200579 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-19 02:58:16.200585 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-19 02:58:16.200591 | orchestrator |
2026-02-19 02:58:16.200597 | orchestrator |
2026-02-19 02:58:16.200604 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 02:58:16.200610 | orchestrator | Thursday 19 February 2026 02:58:15 +0000 (0:00:01.137) 0:08:23.169 *****
2026-02-19 02:58:16.200617 | orchestrator | ===============================================================================
2026-02-19 02:58:16.200623 | orchestrator | osism.commons.packages : Install required packages --------------------- 84.46s
2026-02-19 02:58:16.200629 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.29s
2026-02-19 02:58:16.200635 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.88s
2026-02-19 02:58:16.200641 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.84s
2026-02-19 02:58:16.200648 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.18s
2026-02-19 02:58:16.200667 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.16s
2026-02-19 02:58:16.200674 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.35s
2026-02-19 02:58:16.200680 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.70s
2026-02-19 02:58:16.200687 | orchestrator | osism.services.smartd : Install smartmontools package ------------------ 10.69s
2026-02-19 02:58:16.200693 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.54s
2026-02-19 02:58:16.200699 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.11s
2026-02-19 02:58:16.200706 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.21s
2026-02-19 02:58:16.200712 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.04s
2026-02-19 02:58:16.200718 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.86s
2026-02-19 02:58:16.200724 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.41s
2026-02-19 02:58:16.200729 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.37s
2026-02-19 02:58:16.200735 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.21s
2026-02-19 02:58:16.200740 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.20s
2026-02-19 02:58:16.200746 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.94s
2026-02-19 02:58:16.200752 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.05s
2026-02-19 02:58:16.460651 | orchestrator | + osism apply fail2ban
2026-02-19 02:58:28.956667 | orchestrator | 2026-02-19 02:58:28 | INFO  | Task 33f1267c-7cde-40c5-81c6-ab42ab78f0d5 (fail2ban) was prepared for execution.
2026-02-19 02:58:28.956774 | orchestrator | 2026-02-19 02:58:28 | INFO  | It takes a moment until task 33f1267c-7cde-40c5-81c6-ab42ab78f0d5 (fail2ban) has been started and output is visible here.
2026-02-19 02:58:51.910825 | orchestrator |
2026-02-19 02:58:51.910928 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-19 02:58:51.910940 | orchestrator |
2026-02-19 02:58:51.910950 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-19 02:58:51.910966 | orchestrator | Thursday 19 February 2026 02:58:33 +0000 (0:00:00.254) 0:00:00.254 *****
2026-02-19 02:58:51.910981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 02:58:51.910997 | orchestrator |
2026-02-19 02:58:51.911011 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-19 02:58:51.911027 | orchestrator | Thursday 19 February 2026 02:58:34 +0000 (0:00:01.090) 0:00:01.345 *****
2026-02-19 02:58:51.911043 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:51.911059 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:51.911075 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:51.911086 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:51.911094 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:51.911103 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:51.911112 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:51.911121 | orchestrator |
2026-02-19 02:58:51.911130 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-19 02:58:51.911139 | orchestrator | Thursday 19 February 2026 02:58:47 +0000 (0:00:12.768) 0:00:14.113 *****
2026-02-19 02:58:51.911148 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:51.911157 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:51.911166 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:51.911174 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:51.911183 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:51.911191 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:51.911200 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:51.911209 | orchestrator |
2026-02-19 02:58:51.911217 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-19 02:58:51.911226 | orchestrator | Thursday 19 February 2026 02:58:48 +0000 (0:00:01.524) 0:00:15.638 *****
2026-02-19 02:58:51.911235 | orchestrator | ok: [testbed-manager]
2026-02-19 02:58:51.911245 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:58:51.911254 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:58:51.911262 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:58:51.911271 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:58:51.911279 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:58:51.911288 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:58:51.911297 | orchestrator |
2026-02-19 02:58:51.911306 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-19 02:58:51.911314 | orchestrator | Thursday 19 February 2026 02:58:50 +0000 (0:00:01.441) 0:00:17.079 *****
2026-02-19 02:58:51.911323 | orchestrator | changed: [testbed-manager]
2026-02-19 02:58:51.911334 | orchestrator | changed: [testbed-node-0]
2026-02-19 02:58:51.911344 | orchestrator | changed: [testbed-node-1]
2026-02-19 02:58:51.911353 | orchestrator | changed: [testbed-node-2]
2026-02-19 02:58:51.911363 | orchestrator | changed: [testbed-node-3]
2026-02-19 02:58:51.911373 | orchestrator | changed: [testbed-node-4]
2026-02-19 02:58:51.911383 | orchestrator | changed: [testbed-node-5]
2026-02-19 02:58:51.911393 | orchestrator |
2026-02-19 02:58:51.911403 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 02:58:51.911414 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-19 02:58:51.911450 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-19 02:58:51.911461 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-19 02:58:51.911472 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-19 02:58:51.911482 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-19 02:58:51.911492 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-19 02:58:51.911501 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-19 02:58:51.911511 | orchestrator |
2026-02-19 02:58:51.911521 | orchestrator |
2026-02-19 02:58:51.911531 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 02:58:51.911542 | orchestrator | Thursday 19 February 2026 02:58:51 +0000 (0:00:01.470) 0:00:18.549 *****
2026-02-19 02:58:51.911552 | orchestrator | ===============================================================================
2026-02-19 02:58:51.911561 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.77s
2026-02-19 02:58:51.911571 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.52s
2026-02-19 02:58:51.911581 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.47s
2026-02-19 02:58:51.911591 | orchestrator | osism.services.fail2ban :
Manage fail2ban service ----------------------- 1.44s 2026-02-19 02:58:51.911602 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.09s 2026-02-19 02:58:52.100295 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-19 02:58:52.100425 | orchestrator | + osism apply network 2026-02-19 02:59:04.007626 | orchestrator | 2026-02-19 02:59:04 | INFO  | Task a53c5305-41b8-4c53-94ac-24d05a356e6c (network) was prepared for execution. 2026-02-19 02:59:04.007746 | orchestrator | 2026-02-19 02:59:04 | INFO  | It takes a moment until task a53c5305-41b8-4c53-94ac-24d05a356e6c (network) has been started and output is visible here. 2026-02-19 02:59:33.971720 | orchestrator | 2026-02-19 02:59:33.971866 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-02-19 02:59:33.971885 | orchestrator | 2026-02-19 02:59:33.971899 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-02-19 02:59:33.971912 | orchestrator | Thursday 19 February 2026 02:59:08 +0000 (0:00:00.276) 0:00:00.276 ***** 2026-02-19 02:59:33.971965 | orchestrator | ok: [testbed-manager] 2026-02-19 02:59:33.971998 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:59:33.972012 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:59:33.972025 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:59:33.972037 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:59:33.972045 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:59:33.972053 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:59:33.972060 | orchestrator | 2026-02-19 02:59:33.972069 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-02-19 02:59:33.972077 | orchestrator | Thursday 19 February 2026 02:59:08 +0000 (0:00:00.711) 0:00:00.987 ***** 2026-02-19 02:59:33.972086 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 02:59:33.972096 | orchestrator | 2026-02-19 02:59:33.972104 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-02-19 02:59:33.972135 | orchestrator | Thursday 19 February 2026 02:59:10 +0000 (0:00:01.225) 0:00:02.213 ***** 2026-02-19 02:59:33.972143 | orchestrator | ok: [testbed-manager] 2026-02-19 02:59:33.972151 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:59:33.972159 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:59:33.972166 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:59:33.972174 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:59:33.972182 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:59:33.972189 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:59:33.972197 | orchestrator | 2026-02-19 02:59:33.972205 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-02-19 02:59:33.972213 | orchestrator | Thursday 19 February 2026 02:59:12 +0000 (0:00:02.428) 0:00:04.642 ***** 2026-02-19 02:59:33.972221 | orchestrator | ok: [testbed-manager] 2026-02-19 02:59:33.972229 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:59:33.972238 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:59:33.972247 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:59:33.972256 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:59:33.972265 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:59:33.972274 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:59:33.972283 | orchestrator | 2026-02-19 02:59:33.972292 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-02-19 02:59:33.972301 | orchestrator | Thursday 19 February 2026 02:59:14 +0000 (0:00:01.929) 0:00:06.571 ***** 
2026-02-19 02:59:33.972311 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-02-19 02:59:33.972321 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-02-19 02:59:33.972330 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-02-19 02:59:33.972339 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-02-19 02:59:33.972348 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-02-19 02:59:33.972369 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-02-19 02:59:33.972379 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-02-19 02:59:33.972387 | orchestrator | 2026-02-19 02:59:33.972436 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-02-19 02:59:33.972452 | orchestrator | Thursday 19 February 2026 02:59:15 +0000 (0:00:01.004) 0:00:07.576 ***** 2026-02-19 02:59:33.972461 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-19 02:59:33.972471 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-19 02:59:33.972480 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 02:59:33.972489 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-19 02:59:33.972498 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-19 02:59:33.972508 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-19 02:59:33.972517 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-19 02:59:33.972525 | orchestrator | 2026-02-19 02:59:33.972534 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-02-19 02:59:33.972543 | orchestrator | Thursday 19 February 2026 02:59:18 +0000 (0:00:03.251) 0:00:10.828 ***** 2026-02-19 02:59:33.972552 | orchestrator | changed: [testbed-manager] 2026-02-19 02:59:33.972562 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:59:33.972570 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:59:33.972579 | orchestrator | changed: 
[testbed-node-2] 2026-02-19 02:59:33.972589 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:59:33.972598 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:59:33.972607 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:59:33.972616 | orchestrator | 2026-02-19 02:59:33.972648 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-02-19 02:59:33.972656 | orchestrator | Thursday 19 February 2026 02:59:20 +0000 (0:00:01.541) 0:00:12.369 ***** 2026-02-19 02:59:33.972664 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-19 02:59:33.972671 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-19 02:59:33.972679 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 02:59:33.972687 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-19 02:59:33.972702 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-19 02:59:33.972710 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-19 02:59:33.972717 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-19 02:59:33.972725 | orchestrator | 2026-02-19 02:59:33.972752 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-02-19 02:59:33.972762 | orchestrator | Thursday 19 February 2026 02:59:22 +0000 (0:00:01.671) 0:00:14.040 ***** 2026-02-19 02:59:33.972770 | orchestrator | ok: [testbed-manager] 2026-02-19 02:59:33.972778 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:59:33.972785 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:59:33.972793 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:59:33.972801 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:59:33.972809 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:59:33.972816 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:59:33.972824 | orchestrator | 2026-02-19 02:59:33.972832 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-02-19 02:59:33.972857 | 
orchestrator | Thursday 19 February 2026 02:59:23 +0000 (0:00:01.111) 0:00:15.152 ***** 2026-02-19 02:59:33.972865 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:59:33.972873 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:59:33.972881 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:59:33.972888 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:59:33.972896 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:59:33.972904 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:59:33.972911 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:59:33.972919 | orchestrator | 2026-02-19 02:59:33.972927 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-02-19 02:59:33.972935 | orchestrator | Thursday 19 February 2026 02:59:23 +0000 (0:00:00.652) 0:00:15.805 ***** 2026-02-19 02:59:33.972942 | orchestrator | ok: [testbed-manager] 2026-02-19 02:59:33.972950 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:59:33.972958 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:59:33.972965 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:59:33.972973 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:59:33.972980 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:59:33.972988 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:59:33.972996 | orchestrator | 2026-02-19 02:59:33.973003 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-02-19 02:59:33.973011 | orchestrator | Thursday 19 February 2026 02:59:26 +0000 (0:00:02.636) 0:00:18.441 ***** 2026-02-19 02:59:33.973019 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:59:33.973027 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:59:33.973034 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:59:33.973042 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:59:33.973071 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:59:33.973080 | 
orchestrator | skipping: [testbed-node-5] 2026-02-19 02:59:33.973089 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-02-19 02:59:33.973098 | orchestrator | 2026-02-19 02:59:33.973106 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-02-19 02:59:33.973113 | orchestrator | Thursday 19 February 2026 02:59:27 +0000 (0:00:00.824) 0:00:19.266 ***** 2026-02-19 02:59:33.973121 | orchestrator | ok: [testbed-manager] 2026-02-19 02:59:33.973129 | orchestrator | changed: [testbed-node-1] 2026-02-19 02:59:33.973137 | orchestrator | changed: [testbed-node-0] 2026-02-19 02:59:33.973144 | orchestrator | changed: [testbed-node-2] 2026-02-19 02:59:33.973152 | orchestrator | changed: [testbed-node-3] 2026-02-19 02:59:33.973160 | orchestrator | changed: [testbed-node-4] 2026-02-19 02:59:33.973168 | orchestrator | changed: [testbed-node-5] 2026-02-19 02:59:33.973175 | orchestrator | 2026-02-19 02:59:33.973183 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-02-19 02:59:33.973191 | orchestrator | Thursday 19 February 2026 02:59:29 +0000 (0:00:01.731) 0:00:20.997 ***** 2026-02-19 02:59:33.973199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 02:59:33.973215 | orchestrator | 2026-02-19 02:59:33.973223 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-19 02:59:33.973231 | orchestrator | Thursday 19 February 2026 02:59:30 +0000 (0:00:01.227) 0:00:22.224 ***** 2026-02-19 02:59:33.973238 | orchestrator | ok: [testbed-manager] 2026-02-19 02:59:33.973246 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:59:33.973254 | orchestrator 
| ok: [testbed-node-2] 2026-02-19 02:59:33.973262 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:59:33.973274 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:59:33.973282 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:59:33.973305 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:59:33.973319 | orchestrator | 2026-02-19 02:59:33.973346 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-02-19 02:59:33.973360 | orchestrator | Thursday 19 February 2026 02:59:31 +0000 (0:00:01.751) 0:00:23.975 ***** 2026-02-19 02:59:33.973390 | orchestrator | ok: [testbed-manager] 2026-02-19 02:59:33.973403 | orchestrator | ok: [testbed-node-0] 2026-02-19 02:59:33.973414 | orchestrator | ok: [testbed-node-1] 2026-02-19 02:59:33.973427 | orchestrator | ok: [testbed-node-2] 2026-02-19 02:59:33.973440 | orchestrator | ok: [testbed-node-3] 2026-02-19 02:59:33.973453 | orchestrator | ok: [testbed-node-4] 2026-02-19 02:59:33.973484 | orchestrator | ok: [testbed-node-5] 2026-02-19 02:59:33.973492 | orchestrator | 2026-02-19 02:59:33.973500 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-19 02:59:33.973508 | orchestrator | Thursday 19 February 2026 02:59:32 +0000 (0:00:00.641) 0:00:24.617 ***** 2026-02-19 02:59:33.973516 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-02-19 02:59:33.973524 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-02-19 02:59:33.973532 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-02-19 02:59:33.973540 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-02-19 02:59:33.973548 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-19 02:59:33.973556 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-02-19 02:59:33.973564 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-19 02:59:33.973572 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-02-19 02:59:33.973580 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-19 02:59:33.973588 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-19 02:59:33.973596 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-19 02:59:33.973603 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-02-19 02:59:33.973611 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-19 02:59:33.973619 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-19 02:59:33.973627 | orchestrator | 2026-02-19 02:59:33.973642 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-02-19 02:59:50.434599 | orchestrator | Thursday 19 February 2026 02:59:33 +0000 (0:00:01.322) 0:00:25.940 ***** 2026-02-19 02:59:50.434797 | orchestrator | skipping: [testbed-manager] 2026-02-19 02:59:50.434824 | orchestrator | skipping: [testbed-node-0] 2026-02-19 02:59:50.434834 | orchestrator | skipping: [testbed-node-1] 2026-02-19 02:59:50.434843 | orchestrator | skipping: [testbed-node-2] 2026-02-19 02:59:50.434852 | orchestrator | skipping: [testbed-node-3] 2026-02-19 02:59:50.434861 | orchestrator | skipping: [testbed-node-4] 2026-02-19 02:59:50.434870 | orchestrator | skipping: [testbed-node-5] 2026-02-19 02:59:50.434879 | orchestrator | 2026-02-19 02:59:50.434910 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-02-19 02:59:50.434920 | orchestrator | Thursday 19 February 2026 02:59:34 +0000 (0:00:00.636) 0:00:26.576 ***** 2026-02-19 02:59:50.434931 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-manager, testbed-node-1, testbed-node-5, testbed-node-0, testbed-node-4, testbed-node-3 2026-02-19 02:59:50.434943 | orchestrator | 2026-02-19 02:59:50.434952 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-02-19 02:59:50.434961 | orchestrator | Thursday 19 February 2026 02:59:38 +0000 (0:00:04.355) 0:00:30.931 ***** 2026-02-19 02:59:50.434971 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.434985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.435001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.435015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.435047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-19 
02:59:50.435079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.435094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.435108 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:50.435130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:50.435146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:50.435162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:50.435200 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:50.435227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:50.435258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:50.435273 | orchestrator | 2026-02-19 02:59:50.435289 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-02-19 02:59:50.435305 | orchestrator | Thursday 19 February 2026 02:59:44 +0000 (0:00:05.474) 0:00:36.405 ***** 2026-02-19 02:59:50.435320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.435335 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.435348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.435363 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:50.435378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.435400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.435417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.435447 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:50.435457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-19 02:59:50.435468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 
'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:50.435478 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:50.435497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:50.435514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:56.443559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-19 02:59:56.443672 | orchestrator | 2026-02-19 02:59:56.443689 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-02-19 02:59:56.443703 | orchestrator | Thursday 19 February 2026 02:59:50 +0000 (0:00:05.987) 0:00:42.392 ***** 2026-02-19 02:59:56.443744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 02:59:56.443757 | orchestrator | 2026-02-19 02:59:56.443770 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
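The "Create systemd networkd netdev files" task above renders one `.netdev`/`.network` pair per VXLAN from the logged parameters; the cleanup task later in this play lists the resulting paths (`/etc/systemd/network/30-vxlan0.netdev` etc.). A minimal sketch of what such a generated pair could look like for `vxlan0` on `testbed-manager`, using only values visible in the log (VNI 42, local IP 192.168.16.5, MTU 1350, address 192.168.112.5/20) — the exact template content belongs to the `osism.commons.network` role and may differ:

```ini
# Hypothetical illustration; the real files are rendered by the
# osism.commons.network role templates and may use additional keys.

# /etc/systemd/network/30-vxlan0.netdev
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

# /etc/systemd/network/30-vxlan0.network
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```

Point-to-multipoint forwarding toward the `dests` list shown in the log would typically be expressed as per-destination `[BridgeFDB]` entries in the `.network` file; the "Reload systemd-networkd" handler (skipped in this run) would apply such changes without a reboot.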
2026-02-19 02:59:56.443781 | orchestrator | Thursday 19 February 2026 02:59:51 +0000 (0:00:01.230) 0:00:43.623 *****
2026-02-19 02:59:56.443793 | orchestrator | ok: [testbed-manager]
2026-02-19 02:59:56.443805 | orchestrator | ok: [testbed-node-0]
2026-02-19 02:59:56.443816 | orchestrator | ok: [testbed-node-1]
2026-02-19 02:59:56.443826 | orchestrator | ok: [testbed-node-2]
2026-02-19 02:59:56.443837 | orchestrator | ok: [testbed-node-3]
2026-02-19 02:59:56.443848 | orchestrator | ok: [testbed-node-4]
2026-02-19 02:59:56.443859 | orchestrator | ok: [testbed-node-5]
2026-02-19 02:59:56.443870 | orchestrator |
2026-02-19 02:59:56.443881 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-19 02:59:56.443902 | orchestrator | Thursday 19 February 2026 02:59:52 +0000 (0:00:01.152) 0:00:44.775 *****
2026-02-19 02:59:56.443914 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-19 02:59:56.443926 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-19 02:59:56.443937 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-19 02:59:56.443948 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-19 02:59:56.443958 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-19 02:59:56.443969 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-19 02:59:56.443980 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-19 02:59:56.443990 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-19 02:59:56.444001 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:59:56.444013 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-19 02:59:56.444024 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-19 02:59:56.444051 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-19 02:59:56.444063 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-19 02:59:56.444073 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:59:56.444110 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-19 02:59:56.444124 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-19 02:59:56.444136 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-19 02:59:56.444148 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-19 02:59:56.444160 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:59:56.444173 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-19 02:59:56.444185 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-19 02:59:56.444198 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-19 02:59:56.444209 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-19 02:59:56.444222 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:59:56.444233 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-19 02:59:56.444245 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-19 02:59:56.444257 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-19 02:59:56.444269 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-19 02:59:56.444281 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:59:56.444293 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:59:56.444305 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-19 02:59:56.444317 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-19 02:59:56.444329 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-19 02:59:56.444341 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-19 02:59:56.444352 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:59:56.444365 | orchestrator |
2026-02-19 02:59:56.444377 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-19 02:59:56.444407 | orchestrator | Thursday 19 February 2026 02:59:54 +0000 (0:00:01.992) 0:00:46.768 *****
2026-02-19 02:59:56.444420 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:59:56.444432 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:59:56.444444 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:59:56.444457 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:59:56.444468 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:59:56.444479 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:59:56.444490 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:59:56.444501 | orchestrator |
2026-02-19 02:59:56.444511 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-19 02:59:56.444530 | orchestrator | Thursday 19 February 2026 02:59:55 +0000 (0:00:00.603) 0:00:47.372 *****
2026-02-19 02:59:56.444548 | orchestrator | skipping: [testbed-manager]
2026-02-19 02:59:56.444564 | orchestrator | skipping: [testbed-node-0]
2026-02-19 02:59:56.444589 | orchestrator | skipping: [testbed-node-1]
2026-02-19 02:59:56.444613 | orchestrator | skipping: [testbed-node-2]
2026-02-19 02:59:56.444632 | orchestrator | skipping: [testbed-node-3]
2026-02-19 02:59:56.444649 | orchestrator | skipping: [testbed-node-4]
2026-02-19 02:59:56.444666 | orchestrator | skipping: [testbed-node-5]
2026-02-19 02:59:56.444684 | orchestrator |
2026-02-19 02:59:56.444701 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 02:59:56.444779 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-19 02:59:56.444799 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 02:59:56.444846 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 02:59:56.444866 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 02:59:56.444884 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 02:59:56.444900 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 02:59:56.444917 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 02:59:56.444934 | orchestrator |
2026-02-19 02:59:56.444952 | orchestrator |
2026-02-19 02:59:56.444969 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 02:59:56.444987 | orchestrator | Thursday 19 February 2026 02:59:56 +0000 (0:00:00.694) 0:00:48.066 *****
2026-02-19 02:59:56.445016 | orchestrator | ===============================================================================
2026-02-19 02:59:56.445036 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.99s
2026-02-19 02:59:56.445055 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.47s
2026-02-19 02:59:56.445073 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.36s
2026-02-19 02:59:56.445089 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.25s
2026-02-19 02:59:56.445104 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.64s
2026-02-19 02:59:56.445123 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.43s
2026-02-19 02:59:56.445142 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.99s
2026-02-19 02:59:56.445160 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.93s
2026-02-19 02:59:56.445173 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.75s
2026-02-19 02:59:56.445184 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.73s
2026-02-19 02:59:56.445195 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.67s
2026-02-19 02:59:56.445205 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.54s
2026-02-19 02:59:56.445216 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.32s
2026-02-19 02:59:56.445226 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.23s
2026-02-19 02:59:56.445237 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.23s
2026-02-19 02:59:56.445248 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.23s
2026-02-19 02:59:56.445258 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.15s
2026-02-19 02:59:56.445269 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.11s
2026-02-19 02:59:56.445280 | orchestrator | osism.commons.network : Create required directories --------------------- 1.00s
2026-02-19 02:59:56.445290 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.82s
2026-02-19 02:59:56.740540 | orchestrator | + osism apply wireguard
2026-02-19 03:00:08.742887 | orchestrator | 2026-02-19 03:00:08 | INFO  | Task 3913ae3a-7f27-469f-b811-58a7fad1ba8c (wireguard) was prepared for execution.
2026-02-19 03:00:08.742999 | orchestrator | 2026-02-19 03:00:08 | INFO  | It takes a moment until task 3913ae3a-7f27-469f-b811-58a7fad1ba8c (wireguard) has been started and output is visible here.
2026-02-19 03:00:28.186506 | orchestrator |
2026-02-19 03:00:28.186589 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-02-19 03:00:28.186615 | orchestrator |
2026-02-19 03:00:28.186620 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-02-19 03:00:28.186625 | orchestrator | Thursday 19 February 2026 03:00:13 +0000 (0:00:00.226) 0:00:00.226 *****
2026-02-19 03:00:28.186630 | orchestrator | ok: [testbed-manager]
2026-02-19 03:00:28.186636 | orchestrator |
2026-02-19 03:00:28.186640 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-02-19 03:00:28.186644 | orchestrator | Thursday 19 February 2026 03:00:14 +0000 (0:00:01.439) 0:00:01.665 *****
2026-02-19 03:00:28.186649 | orchestrator | changed: [testbed-manager]
2026-02-19 03:00:28.186658 | orchestrator |
2026-02-19 03:00:28.186662 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-02-19 03:00:28.186667 | orchestrator | Thursday 19 February 2026 03:00:20 +0000 (0:00:06.199) 0:00:07.865 *****
2026-02-19 03:00:28.186671 | orchestrator | changed: [testbed-manager]
2026-02-19 03:00:28.186675 | orchestrator |
2026-02-19 03:00:28.186695 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-02-19 03:00:28.186700 | orchestrator | Thursday 19 February 2026 03:00:21 +0000 (0:00:00.539) 0:00:08.405 *****
2026-02-19 03:00:28.186705 | orchestrator | changed: [testbed-manager]
2026-02-19 03:00:28.186709 | orchestrator |
2026-02-19 03:00:28.186713 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-02-19 03:00:28.186717 | orchestrator | Thursday 19 February 2026 03:00:21 +0000 (0:00:00.434) 0:00:08.839 *****
2026-02-19 03:00:28.186721 | orchestrator | ok: [testbed-manager]
2026-02-19 03:00:28.186727 | orchestrator |
2026-02-19 03:00:28.186733 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-02-19 03:00:28.186742 | orchestrator | Thursday 19 February 2026 03:00:22 +0000 (0:00:00.683) 0:00:09.522 *****
2026-02-19 03:00:28.186751 | orchestrator | ok: [testbed-manager]
2026-02-19 03:00:28.186758 | orchestrator |
2026-02-19 03:00:28.186765 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-02-19 03:00:28.186771 | orchestrator | Thursday 19 February 2026 03:00:22 +0000 (0:00:00.437) 0:00:09.960 *****
2026-02-19 03:00:28.186777 | orchestrator | ok: [testbed-manager]
2026-02-19 03:00:28.186783 | orchestrator |
2026-02-19 03:00:28.186789 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-02-19 03:00:28.186796 | orchestrator | Thursday 19 February 2026 03:00:23 +0000 (0:00:00.447) 0:00:10.408 *****
2026-02-19 03:00:28.186802 | orchestrator | changed: [testbed-manager]
2026-02-19 03:00:28.186809 | orchestrator |
2026-02-19 03:00:28.186816 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-02-19 03:00:28.186822 | orchestrator | Thursday 19 February 2026 03:00:24 +0000 (0:00:01.064) 0:00:11.473 *****
2026-02-19 03:00:28.186829 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-19 03:00:28.186836 | orchestrator | changed: [testbed-manager]
2026-02-19 03:00:28.186843 | orchestrator |
2026-02-19 03:00:28.186850 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-19 03:00:28.186856 | orchestrator | Thursday 19 February 2026 03:00:25 +0000 (0:00:00.840) 0:00:12.314 *****
2026-02-19 03:00:28.186863 | orchestrator | changed: [testbed-manager]
2026-02-19 03:00:28.186869 | orchestrator |
2026-02-19 03:00:28.186874 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-19 03:00:28.186879 | orchestrator | Thursday 19 February 2026 03:00:26 +0000 (0:00:01.650) 0:00:13.965 *****
2026-02-19 03:00:28.186883 | orchestrator | changed: [testbed-manager]
2026-02-19 03:00:28.186887 | orchestrator |
2026-02-19 03:00:28.186892 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 03:00:28.186896 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-19 03:00:28.186902 | orchestrator |
2026-02-19 03:00:28.186906 | orchestrator |
2026-02-19 03:00:28.186913 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 03:00:28.186926 | orchestrator | Thursday 19 February 2026 03:00:27 +0000 (0:00:00.956) 0:00:14.921 *****
2026-02-19 03:00:28.186937 | orchestrator | ===============================================================================
2026-02-19 03:00:28.186944 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.20s
2026-02-19 03:00:28.186950 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.65s
2026-02-19 03:00:28.186957 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.44s
2026-02-19 03:00:28.186963 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.06s
2026-02-19 03:00:28.186970 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s
2026-02-19 03:00:28.186976 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.84s
2026-02-19 03:00:28.186982 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.68s
2026-02-19 03:00:28.186988 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s
2026-02-19 03:00:28.186994 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s
2026-02-19 03:00:28.187001 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.44s
2026-02-19 03:00:28.187007 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2026-02-19 03:00:28.508265 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-19 03:00:28.543642 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-19 03:00:28.543807 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-19 03:00:28.624016 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 175 0 --:--:-- --:--:-- --:--:-- 177
2026-02-19 03:00:28.639676 | orchestrator | + osism apply --environment custom workarounds
2026-02-19 03:00:30.529733 | orchestrator | 2026-02-19 03:00:30 | INFO  | Trying to run play workarounds in environment custom
2026-02-19 03:00:40.659787 | orchestrator | 2026-02-19 03:00:40 | INFO  | Task b2101ea7-02e1-462b-8244-70053ae19825 (workarounds) was prepared for execution.
2026-02-19 03:00:40.659908 | orchestrator | 2026-02-19 03:00:40 | INFO  | It takes a moment until task b2101ea7-02e1-462b-8244-70053ae19825 (workarounds) has been started and output is visible here.
2026-02-19 03:01:05.654449 | orchestrator |
2026-02-19 03:01:05.654576 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-19 03:01:05.654594 | orchestrator |
2026-02-19 03:01:05.654606 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-19 03:01:05.654618 | orchestrator | Thursday 19 February 2026 03:00:44 +0000 (0:00:00.134) 0:00:00.134 *****
2026-02-19 03:01:05.654631 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-19 03:01:05.654642 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-19 03:01:05.654688 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-19 03:01:05.654707 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-19 03:01:05.654719 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-19 03:01:05.654730 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-19 03:01:05.654741 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-19 03:01:05.654752 | orchestrator |
2026-02-19 03:01:05.654763 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-19 03:01:05.654773 | orchestrator |
2026-02-19 03:01:05.654784 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-19 03:01:05.654795 | orchestrator | Thursday 19 February 2026 03:00:45 +0000 (0:00:00.777) 0:00:00.911 *****
2026-02-19 03:01:05.654807 | orchestrator | ok: [testbed-manager]
2026-02-19 03:01:05.654839 | orchestrator |
2026-02-19 03:01:05.654851 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-19 03:01:05.654862 | orchestrator |
2026-02-19 03:01:05.654873 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-19 03:01:05.654884 | orchestrator | Thursday 19 February 2026 03:00:47 +0000 (0:00:02.208) 0:00:03.120 *****
2026-02-19 03:01:05.654895 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:01:05.654905 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:01:05.654916 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:01:05.654926 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:01:05.654937 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:01:05.654948 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:01:05.654958 | orchestrator |
2026-02-19 03:01:05.654973 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-19 03:01:05.654986 | orchestrator |
2026-02-19 03:01:05.654998 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-19 03:01:05.655019 | orchestrator | Thursday 19 February 2026 03:00:49 +0000 (0:00:01.965) 0:00:05.086 *****
2026-02-19 03:01:05.655033 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-19 03:01:05.655048 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-19 03:01:05.655061 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-19 03:01:05.655074 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-19 03:01:05.655087 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-19 03:01:05.655100 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-19 03:01:05.655113 | orchestrator |
2026-02-19 03:01:05.655126 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-19 03:01:05.655138 | orchestrator | Thursday 19 February 2026 03:00:51 +0000 (0:00:01.543) 0:00:06.630 *****
2026-02-19 03:01:05.655151 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:01:05.655164 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:01:05.655177 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:01:05.655189 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:01:05.655202 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:01:05.655215 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:01:05.655227 | orchestrator |
2026-02-19 03:01:05.655240 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-19 03:01:05.655253 | orchestrator | Thursday 19 February 2026 03:00:54 +0000 (0:00:03.604) 0:00:10.234 *****
2026-02-19 03:01:05.655266 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:01:05.655279 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:01:05.655291 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:01:05.655305 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:01:05.655318 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:01:05.655330 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:01:05.655341 | orchestrator |
2026-02-19 03:01:05.655352 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-19 03:01:05.655362 | orchestrator |
2026-02-19 03:01:05.655373 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-19 03:01:05.655384 | orchestrator | Thursday 19 February 2026 03:00:55 +0000 (0:00:00.763) 0:00:10.998 *****
2026-02-19 03:01:05.655395 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:01:05.655405 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:01:05.655416 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:01:05.655427 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:01:05.655437 | orchestrator | changed: [testbed-manager]
2026-02-19 03:01:05.655448 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:01:05.655466 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:01:05.655477 | orchestrator |
2026-02-19 03:01:05.655488 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-19 03:01:05.655498 | orchestrator | Thursday 19 February 2026 03:00:57 +0000 (0:00:01.543) 0:00:12.541 *****
2026-02-19 03:01:05.655509 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:01:05.655520 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:01:05.655530 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:01:05.655541 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:01:05.655552 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:01:05.655562 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:01:05.655590 | orchestrator | changed: [testbed-manager]
2026-02-19 03:01:05.655602 | orchestrator |
2026-02-19 03:01:05.655612 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-19 03:01:05.655623 | orchestrator | Thursday 19 February 2026 03:00:58 +0000 (0:00:01.631) 0:00:14.172 *****
2026-02-19 03:01:05.655634 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:01:05.655645 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:01:05.655679 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:01:05.655691 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:01:05.655702 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:01:05.655713 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:01:05.655724 | orchestrator | ok: [testbed-manager]
2026-02-19 03:01:05.655735 | orchestrator |
2026-02-19 03:01:05.655746 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-19 03:01:05.655757 | orchestrator | Thursday 19 February 2026 03:01:00 +0000 (0:00:01.569) 0:00:15.742 *****
2026-02-19 03:01:05.655768 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:01:05.655779 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:01:05.655790 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:01:05.655801 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:01:05.655812 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:01:05.655822 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:01:05.655833 | orchestrator | changed: [testbed-manager]
2026-02-19 03:01:05.655844 | orchestrator |
2026-02-19 03:01:05.655855 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-19 03:01:05.655866 | orchestrator | Thursday 19 February 2026 03:01:02 +0000 (0:00:01.739) 0:00:17.481 *****
2026-02-19 03:01:05.655877 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:01:05.655888 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:01:05.655899 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:01:05.655910 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:01:05.655921 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:01:05.655932 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:01:05.655943 | orchestrator | skipping: [testbed-manager]
2026-02-19 03:01:05.655954 | orchestrator |
2026-02-19 03:01:05.655965 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-19 03:01:05.655976 | orchestrator |
2026-02-19 03:01:05.655987 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-19 03:01:05.655998 | orchestrator | Thursday 19 February 2026 03:01:02 +0000 (0:00:00.584) 0:00:18.066 *****
2026-02-19 03:01:05.656009 | orchestrator | ok: [testbed-manager]
2026-02-19 03:01:05.656020 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:01:05.656030 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:01:05.656041 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:01:05.656052 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:01:05.656068 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:01:05.656079 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:01:05.656089 | orchestrator |
2026-02-19 03:01:05.656101 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 03:01:05.656112 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-19 03:01:05.656124 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-19 03:01:05.656144 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-19 03:01:05.656155 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-19 03:01:05.656166 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-19 03:01:05.656177 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-19 03:01:05.656188 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-19 03:01:05.656199 | orchestrator |
2026-02-19 03:01:05.656210 | orchestrator |
2026-02-19 03:01:05.656221 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 03:01:05.656232 | orchestrator | Thursday 19 February 2026 03:01:05 +0000 (0:00:02.984) 0:00:21.050 *****
2026-02-19 03:01:05.656243 | orchestrator | ===============================================================================
2026-02-19 03:01:05.656254 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.60s
2026-02-19 03:01:05.656265 | orchestrator | Install python3-docker -------------------------------------------------- 2.98s
2026-02-19 03:01:05.656276 | orchestrator | Apply netplan configuration --------------------------------------------- 2.21s
2026-02-19 03:01:05.656287 | orchestrator | Apply netplan configuration --------------------------------------------- 1.97s
2026-02-19 03:01:05.656298 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.74s
2026-02-19 03:01:05.656309 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.63s
2026-02-19 03:01:05.656320 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.57s
2026-02-19 03:01:05.656330 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.54s
2026-02-19 03:01:05.656341 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.54s
2026-02-19 03:01:05.656352 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.78s
2026-02-19 03:01:05.656363 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.76s
2026-02-19 03:01:05.656380 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.58s
2026-02-19 03:01:06.263982 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-19 03:01:18.392624 | orchestrator | 2026-02-19 03:01:18 | INFO  | Task b32c98ad-edbc-4f88-b89d-babf539ac901 (reboot) was prepared for execution.
2026-02-19 03:01:18.392826 | orchestrator | 2026-02-19 03:01:18 | INFO  | It takes a moment until task b32c98ad-edbc-4f88-b89d-babf539ac901 (reboot) has been started and output is visible here.
2026-02-19 03:01:28.120125 | orchestrator |
2026-02-19 03:01:28.120206 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-19 03:01:28.120213 | orchestrator |
2026-02-19 03:01:28.120218 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-19 03:01:28.120223 | orchestrator | Thursday 19 February 2026 03:01:22 +0000 (0:00:00.148) 0:00:00.148 *****
2026-02-19 03:01:28.120227 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:01:28.120233 | orchestrator |
2026-02-19 03:01:28.120237 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-19 03:01:28.120241 | orchestrator | Thursday 19 February 2026 03:01:22 +0000 (0:00:00.083) 0:00:00.231 *****
2026-02-19 03:01:28.120244 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:01:28.120248 | orchestrator |
2026-02-19 03:01:28.120252 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-19 03:01:28.120273 | orchestrator | Thursday 19 February 2026 03:01:23 +0000 (0:00:00.909) 0:00:01.141 *****
2026-02-19 03:01:28.120277 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:01:28.120281 | orchestrator |
2026-02-19 03:01:28.120285 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-19 03:01:28.120288 | orchestrator |
2026-02-19 03:01:28.120292 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-19 03:01:28.120296 | orchestrator | Thursday 19 February 2026 03:01:23 +0000 (0:00:00.106) 0:00:01.247 *****
2026-02-19 03:01:28.120300 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:01:28.120304 | orchestrator |
2026-02-19 03:01:28.120308 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-19 03:01:28.120312 | orchestrator | Thursday 19 February 2026 03:01:23 +0000 (0:00:00.087) 0:00:01.335 *****
2026-02-19 03:01:28.120315 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:01:28.120319 | orchestrator |
2026-02-19 03:01:28.120323 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-19 03:01:28.120337 | orchestrator | Thursday 19 February 2026 03:01:24 +0000 (0:00:00.667) 0:00:02.002 *****
2026-02-19 03:01:28.120340 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:01:28.120344 | orchestrator |
2026-02-19 03:01:28.120348 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-19 03:01:28.120352 | orchestrator |
2026-02-19 03:01:28.120356 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-19 03:01:28.120359 | orchestrator | Thursday 19 February 2026 03:01:24 +0000 (0:00:00.160) 0:00:02.110 *****
2026-02-19 03:01:28.120363 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:01:28.120367 | orchestrator |
2026-02-19 03:01:28.120370 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-19 03:01:28.120374 | orchestrator | Thursday 19 February 2026 03:01:24 +0000 (0:00:00.160) 0:00:02.271 *****
2026-02-19 03:01:28.120378 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:01:28.120382 | orchestrator |
2026-02-19 03:01:28.120386 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-19 03:01:28.120390 | orchestrator | Thursday 19 February 2026 03:01:25 +0000 (0:00:00.695) 0:00:02.967 *****
2026-02-19 03:01:28.120393 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:01:28.120397 | orchestrator |
2026-02-19 03:01:28.120401 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-19 03:01:28.120405 | orchestrator |
2026-02-19 03:01:28.120408 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-19 03:01:28.120412 | orchestrator | Thursday 19 February 2026 03:01:25 +0000 (0:00:00.119) 0:00:03.086 *****
2026-02-19 03:01:28.120416 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:01:28.120420 | orchestrator |
2026-02-19 03:01:28.120423 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-19 03:01:28.120427 | orchestrator | Thursday 19 February 2026 03:01:25 +0000 (0:00:00.099) 0:00:03.185 *****
2026-02-19 03:01:28.120431 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:01:28.120435 | orchestrator |
2026-02-19 03:01:28.120439 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-19 03:01:28.120442 | orchestrator | Thursday 19 February 2026 03:01:26 +0000 (0:00:00.689) 0:00:03.874 *****
2026-02-19 03:01:28.120446 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:01:28.120450 | orchestrator |
2026-02-19 03:01:28.120453 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-19 03:01:28.120457 | orchestrator |
2026-02-19 03:01:28.120461 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-19 03:01:28.120465 | orchestrator | Thursday 19 February 2026 03:01:26 +0000 (0:00:00.120) 0:00:03.994 *****
2026-02-19 03:01:28.120468 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:01:28.120472 | orchestrator |
2026-02-19 03:01:28.120476 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-19 03:01:28.120484 | orchestrator | Thursday 19 February 2026 03:01:26 +0000 (0:00:00.104) 0:00:04.099 *****
2026-02-19 03:01:28.120488 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:01:28.120492 | orchestrator |
2026-02-19 03:01:28.120496 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-19 03:01:28.120502 | orchestrator | Thursday 19 February 2026 03:01:27 +0000 (0:00:00.708) 0:00:04.808 *****
2026-02-19 03:01:28.120508 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:01:28.120514 | orchestrator |
2026-02-19 03:01:28.120520 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-19 03:01:28.120526 | orchestrator |
2026-02-19 03:01:28.120532 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-19 03:01:28.120538 | orchestrator | Thursday 19 February 2026 03:01:27 +0000 (0:00:00.123) 0:00:04.931 *****
2026-02-19 03:01:28.120543 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:01:28.120550 | orchestrator |
2026-02-19 03:01:28.120555 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-19 03:01:28.120561 | orchestrator | Thursday 19 February 2026 03:01:27 +0000 (0:00:00.109) 0:00:05.040 *****
2026-02-19 03:01:28.120567 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:01:28.120574 | orchestrator |
2026-02-19 03:01:28.120580 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-19 03:01:28.120586 | orchestrator | Thursday 19 February 2026 03:01:27 +0000 (0:00:00.638) 0:00:05.678 *****
2026-02-19 03:01:28.120605 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:01:28.120611 | orchestrator |
2026-02-19 03:01:28.120615 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 03:01:28.120619 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-19 03:01:28.120624 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:01:28.120628 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:01:28.120632 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:01:28.120636 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:01:28.120685 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:01:28.120690 | orchestrator | 2026-02-19 03:01:28.120695 | orchestrator | 2026-02-19 03:01:28.120700 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:01:28.120704 | orchestrator | Thursday 19 February 2026 03:01:27 +0000 (0:00:00.031) 0:00:05.710 ***** 2026-02-19 03:01:28.120715 | orchestrator | =============================================================================== 2026-02-19 03:01:28.120722 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.31s 2026-02-19 03:01:28.120729 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.65s 2026-02-19 03:01:28.120736 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.61s 2026-02-19 03:01:28.315168 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-19 03:01:40.115919 | orchestrator | 2026-02-19 03:01:40 | INFO  | Task 578493b1-b53e-47cf-b288-7e54488bd9db (wait-for-connection) was prepared for execution. 2026-02-19 03:01:40.116020 | orchestrator | 2026-02-19 03:01:40 | INFO  | It takes a moment until task 578493b1-b53e-47cf-b288-7e54488bd9db (wait-for-connection) has been started and output is visible here. 
2026-02-19 03:01:55.917818 | orchestrator | 2026-02-19 03:01:55.917960 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-19 03:01:55.917980 | orchestrator | 2026-02-19 03:01:55.917993 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-19 03:01:55.918005 | orchestrator | Thursday 19 February 2026 03:01:43 +0000 (0:00:00.239) 0:00:00.239 ***** 2026-02-19 03:01:55.918078 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:01:55.918093 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:01:55.918104 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:01:55.918116 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:01:55.918127 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:01:55.918137 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:01:55.918149 | orchestrator | 2026-02-19 03:01:55.918160 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:01:55.918171 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:01:55.918184 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:01:55.918196 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:01:55.918207 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:01:55.918218 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:01:55.918229 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:01:55.918240 | orchestrator | 2026-02-19 03:01:55.918251 | orchestrator | 2026-02-19 03:01:55.918263 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-19 03:01:55.918274 | orchestrator | Thursday 19 February 2026 03:01:55 +0000 (0:00:11.537) 0:00:11.777 ***** 2026-02-19 03:01:55.918288 | orchestrator | =============================================================================== 2026-02-19 03:01:55.918301 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.54s 2026-02-19 03:01:56.252420 | orchestrator | + osism apply hddtemp 2026-02-19 03:02:08.369147 | orchestrator | 2026-02-19 03:02:08 | INFO  | Task 43e4b0c8-29cf-4530-b38a-387281504954 (hddtemp) was prepared for execution. 2026-02-19 03:02:08.369290 | orchestrator | 2026-02-19 03:02:08 | INFO  | It takes a moment until task 43e4b0c8-29cf-4530-b38a-387281504954 (hddtemp) has been started and output is visible here. 2026-02-19 03:02:37.819555 | orchestrator | 2026-02-19 03:02:37.819696 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-19 03:02:37.819715 | orchestrator | 2026-02-19 03:02:37.819727 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-19 03:02:37.819739 | orchestrator | Thursday 19 February 2026 03:02:12 +0000 (0:00:00.262) 0:00:00.262 ***** 2026-02-19 03:02:37.819750 | orchestrator | ok: [testbed-manager] 2026-02-19 03:02:37.819763 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:02:37.819774 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:02:37.819785 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:02:37.819795 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:02:37.819806 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:02:37.819817 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:02:37.819828 | orchestrator | 2026-02-19 03:02:37.819839 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-02-19 03:02:37.819850 | orchestrator | Thursday 19 February 2026 
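The reboot plays above deliberately fire the reboot without blocking ("do not wait for the reboot to complete"), and a separate `wait-for-connection` run then polls until the nodes answer again. A minimal generic sketch of that pattern in plain shell — `wait_for_ssh`, its host argument, and the 5-second poll interval are illustrative assumptions, not taken from the job:

```shell
# Hypothetical helper mirroring the reboot-then-wait pattern in the log:
# trigger the reboot asynchronously, then poll until SSH answers again.
wait_for_ssh() {
    local host=$1
    local max_attempts=${2:-60}   # assumed default, not from the job
    local attempt=1
    # Probe with a short connect timeout; BatchMode avoids password prompts.
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        if (( attempt++ >= max_attempts )); then
            return 1              # give up after max_attempts probes
        fi
        sleep 5
    done
}
```

In the job itself the same effect is achieved declaratively with `osism apply wait-for-connection -l testbed-nodes`, which wraps Ansible's connection-wait logic instead of raw SSH probes.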
03:02:13 +0000 (0:00:00.756) 0:00:01.018 ***** 2026-02-19 03:02:37.819862 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 03:02:37.819899 | orchestrator | 2026-02-19 03:02:37.819911 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-19 03:02:37.819922 | orchestrator | Thursday 19 February 2026 03:02:14 +0000 (0:00:01.267) 0:00:02.285 ***** 2026-02-19 03:02:37.819933 | orchestrator | ok: [testbed-manager] 2026-02-19 03:02:37.819944 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:02:37.819954 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:02:37.819965 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:02:37.819977 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:02:37.819988 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:02:37.819999 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:02:37.820010 | orchestrator | 2026-02-19 03:02:37.820021 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-19 03:02:37.820046 | orchestrator | Thursday 19 February 2026 03:02:16 +0000 (0:00:02.074) 0:00:04.360 ***** 2026-02-19 03:02:37.820058 | orchestrator | changed: [testbed-manager] 2026-02-19 03:02:37.820070 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:02:37.820081 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:02:37.820092 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:02:37.820103 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:02:37.820114 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:02:37.820124 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:02:37.820135 | orchestrator | 2026-02-19 03:02:37.820146 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-19 03:02:37.820157 | orchestrator | Thursday 19 February 2026 03:02:17 +0000 (0:00:01.179) 0:00:05.540 ***** 2026-02-19 03:02:37.820168 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:02:37.820179 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:02:37.820190 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:02:37.820201 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:02:37.820212 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:02:37.820223 | orchestrator | ok: [testbed-manager] 2026-02-19 03:02:37.820233 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:02:37.820244 | orchestrator | 2026-02-19 03:02:37.820255 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-19 03:02:37.820266 | orchestrator | Thursday 19 February 2026 03:02:19 +0000 (0:00:01.179) 0:00:06.720 ***** 2026-02-19 03:02:37.820277 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:02:37.820288 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:02:37.820299 | orchestrator | changed: [testbed-manager] 2026-02-19 03:02:37.820310 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:02:37.820321 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:02:37.820332 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:02:37.820343 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:02:37.820353 | orchestrator | 2026-02-19 03:02:37.820365 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-19 03:02:37.820376 | orchestrator | Thursday 19 February 2026 03:02:20 +0000 (0:00:00.857) 0:00:07.577 ***** 2026-02-19 03:02:37.820386 | orchestrator | changed: [testbed-manager] 2026-02-19 03:02:37.820397 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:02:37.820408 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:02:37.820419 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:02:37.820430 | orchestrator | changed: 
[testbed-node-3] 2026-02-19 03:02:37.820440 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:02:37.820451 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:02:37.820462 | orchestrator | 2026-02-19 03:02:37.820473 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-19 03:02:37.820484 | orchestrator | Thursday 19 February 2026 03:02:34 +0000 (0:00:14.697) 0:00:22.274 ***** 2026-02-19 03:02:37.820496 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 03:02:37.820515 | orchestrator | 2026-02-19 03:02:37.820526 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-19 03:02:37.820537 | orchestrator | Thursday 19 February 2026 03:02:35 +0000 (0:00:01.128) 0:00:23.402 ***** 2026-02-19 03:02:37.820548 | orchestrator | changed: [testbed-manager] 2026-02-19 03:02:37.820559 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:02:37.820570 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:02:37.820581 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:02:37.820592 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:02:37.820603 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:02:37.820614 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:02:37.820640 | orchestrator | 2026-02-19 03:02:37.820651 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:02:37.820663 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:02:37.820690 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:02:37.820703 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:02:37.820714 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:02:37.820725 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:02:37.820736 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:02:37.820746 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:02:37.820757 | orchestrator | 2026-02-19 03:02:37.820768 | orchestrator | 2026-02-19 03:02:37.820779 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:02:37.820790 | orchestrator | Thursday 19 February 2026 03:02:37 +0000 (0:00:01.749) 0:00:25.152 ***** 2026-02-19 03:02:37.820801 | orchestrator | =============================================================================== 2026-02-19 03:02:37.820811 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.70s 2026-02-19 03:02:37.820822 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.07s 2026-02-19 03:02:37.820832 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.75s 2026-02-19 03:02:37.820848 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.27s 2026-02-19 03:02:37.820859 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.18s 2026-02-19 03:02:37.820870 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s 2026-02-19 03:02:37.820881 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.13s 2026-02-19 03:02:37.820891 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.86s 2026-02-19 03:02:37.820902 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.76s 2026-02-19 03:02:38.005255 | orchestrator | ++ semver 9.5.0 7.1.1 2026-02-19 03:02:38.060555 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-19 03:02:38.060683 | orchestrator | + sudo systemctl restart manager.service 2026-02-19 03:02:55.122444 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-19 03:02:55.122534 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-19 03:02:55.122544 | orchestrator | + local max_attempts=60 2026-02-19 03:02:55.122553 | orchestrator | + local name=ceph-ansible 2026-02-19 03:02:55.122561 | orchestrator | + local attempt_num=1 2026-02-19 03:02:55.122568 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:02:55.164086 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-19 03:02:55.164182 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-19 03:02:55.164197 | orchestrator | + sleep 5 2026-02-19 03:03:00.172935 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:03:00.290002 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-19 03:03:00.290176 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-19 03:03:00.290192 | orchestrator | + sleep 5 2026-02-19 03:03:05.295727 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:03:05.325302 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-19 03:03:05.325373 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-19 03:03:05.325381 | orchestrator | + sleep 5 2026-02-19 03:03:10.329107 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:03:10.370524 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-19 03:03:10.370685 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-02-19 03:03:10.370712 | orchestrator | + sleep 5 2026-02-19 03:03:15.374839 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:03:15.415243 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-19 03:03:15.415307 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-19 03:03:15.415313 | orchestrator | + sleep 5 2026-02-19 03:03:20.419444 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:03:20.456840 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-19 03:03:20.456932 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-19 03:03:20.456953 | orchestrator | + sleep 5 2026-02-19 03:03:25.462250 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:03:25.499924 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-19 03:03:25.499999 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-19 03:03:25.500008 | orchestrator | + sleep 5 2026-02-19 03:03:30.506157 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:03:30.539147 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-19 03:03:30.539213 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-19 03:03:30.539219 | orchestrator | + sleep 5 2026-02-19 03:03:35.543204 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:03:35.575004 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-19 03:03:35.575092 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-19 03:03:35.575105 | orchestrator | + sleep 5 2026-02-19 03:03:40.578305 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:03:40.602694 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-19 03:03:40.602831 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-02-19 03:03:40.602852 | orchestrator | + sleep 5 2026-02-19 03:03:45.607068 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:03:45.644860 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-19 03:03:45.644930 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-19 03:03:45.644938 | orchestrator | + sleep 5 2026-02-19 03:03:50.650112 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:03:50.685820 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-19 03:03:50.685902 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-19 03:03:50.685911 | orchestrator | + sleep 5 2026-02-19 03:03:55.689925 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:03:55.727117 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-19 03:03:55.727196 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-19 03:03:55.727206 | orchestrator | + sleep 5 2026-02-19 03:04:00.731968 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-19 03:04:00.774984 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-19 03:04:00.775097 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-19 03:04:00.775120 | orchestrator | + local max_attempts=60 2026-02-19 03:04:00.775467 | orchestrator | + local name=kolla-ansible 2026-02-19 03:04:00.775493 | orchestrator | + local attempt_num=1 2026-02-19 03:04:00.776484 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-19 03:04:00.810697 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-19 03:04:00.810816 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-19 03:04:00.810863 | orchestrator | + local max_attempts=60 2026-02-19 03:04:00.811218 | orchestrator | + local name=osism-ansible 2026-02-19 03:04:00.811250 | 
orchestrator | + local attempt_num=1 2026-02-19 03:04:00.811976 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-19 03:04:00.850822 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-19 03:04:00.850949 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-19 03:04:00.850976 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-19 03:04:01.010561 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-19 03:04:01.172508 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-19 03:04:01.308427 | orchestrator | ARA in osism-ansible already disabled. 2026-02-19 03:04:01.471098 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-19 03:04:01.471224 | orchestrator | + osism apply gather-facts 2026-02-19 03:04:13.362909 | orchestrator | 2026-02-19 03:04:13 | INFO  | Task 6fa28b67-cbce-4e2e-9749-b91a6cfd12dd (gather-facts) was prepared for execution. 2026-02-19 03:04:13.362992 | orchestrator | 2026-02-19 03:04:13 | INFO  | It takes a moment until task 6fa28b67-cbce-4e2e-9749-b91a6cfd12dd (gather-facts) has been started and output is visible here. 
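The `set -x` trace above shows `wait_for_container_healthy` polling `docker inspect` every 5 seconds until the container's health status is `healthy` (here: roughly 13 probes for `ceph-ansible` after the manager restart, immediate success for `kolla-ansible` and `osism-ansible`). A reconstruction of that helper from the trace — the timeout behaviour on exhausting `max_attempts` is an assumption, since this run never hits it:

```shell
# Reconstructed from the traced calls: wait_for_container_healthy 60 ceph-ansible
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the Docker health status until it reports "healthy".
    while [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" != healthy ]]; do
        # Post-increment, as in the trace: (( attempt_num++ == max_attempts ))
        if (( attempt_num++ == max_attempts )); then
            return 1   # assumption: the real script may exit here instead
        fi
        sleep 5
    done
}
```

Note that the status passes through `starting` before `healthy`, so the loop also covers the window where the container's health check has begun but not yet succeeded.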
2026-02-19 03:04:26.956186 | orchestrator | 2026-02-19 03:04:26.956288 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-19 03:04:26.956304 | orchestrator | 2026-02-19 03:04:26.956316 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-19 03:04:26.956327 | orchestrator | Thursday 19 February 2026 03:04:17 +0000 (0:00:00.191) 0:00:00.191 ***** 2026-02-19 03:04:26.956339 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:04:26.956350 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:04:26.956360 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:04:26.956371 | orchestrator | ok: [testbed-manager] 2026-02-19 03:04:26.956381 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:04:26.956391 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:04:26.956402 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:04:26.956412 | orchestrator | 2026-02-19 03:04:26.956422 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-19 03:04:26.956433 | orchestrator | 2026-02-19 03:04:26.956443 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-19 03:04:26.956453 | orchestrator | Thursday 19 February 2026 03:04:26 +0000 (0:00:08.830) 0:00:09.021 ***** 2026-02-19 03:04:26.956464 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:04:26.956475 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:04:26.956485 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:04:26.956496 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:04:26.956506 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:04:26.956516 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:04:26.956526 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:04:26.956536 | orchestrator | 2026-02-19 03:04:26.956547 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-19 03:04:26.956557 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:04:26.956569 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:04:26.956579 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:04:26.956590 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:04:26.956600 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:04:26.956640 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:04:26.956677 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:04:26.956687 | orchestrator | 2026-02-19 03:04:26.956697 | orchestrator | 2026-02-19 03:04:26.956707 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:04:26.956717 | orchestrator | Thursday 19 February 2026 03:04:26 +0000 (0:00:00.485) 0:00:09.507 ***** 2026-02-19 03:04:26.956727 | orchestrator | =============================================================================== 2026-02-19 03:04:26.956739 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.83s 2026-02-19 03:04:26.956751 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2026-02-19 03:04:27.177316 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-19 03:04:27.187063 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-19 
03:04:27.194967 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-19 03:04:27.206339 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-19 03:04:27.214106 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-19 03:04:27.223919 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-19 03:04:27.231363 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-19 03:04:27.241419 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-19 03:04:27.251359 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-19 03:04:27.259902 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-19 03:04:27.269181 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-19 03:04:27.278241 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-19 03:04:27.286342 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-19 03:04:27.295024 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-19 03:04:27.303847 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-19 03:04:27.313301 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-19 03:04:27.322548 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-19 03:04:27.332169 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-19 03:04:27.347232 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-19 03:04:27.362955 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-19 03:04:27.372376 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-19 03:04:27.384243 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-19 03:04:27.394925 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-19 03:04:27.407359 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-19 03:04:27.519197 | orchestrator | ok: Runtime: 0:24:18.768158 2026-02-19 03:04:27.619761 | 2026-02-19 03:04:27.619919 | TASK [Deploy services] 2026-02-19 03:04:28.366307 | orchestrator | 2026-02-19 03:04:28.366545 | orchestrator | # DEPLOY SERVICES 2026-02-19 03:04:28.366573 | orchestrator | 2026-02-19 03:04:28.366584 | orchestrator | + set -e 2026-02-19 03:04:28.366594 | orchestrator | + echo 2026-02-19 03:04:28.366603 | orchestrator | + echo '# DEPLOY SERVICES' 2026-02-19 03:04:28.366650 | orchestrator | + echo 2026-02-19 03:04:28.366684 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-19 03:04:28.366701 | orchestrator | ++ export INTERACTIVE=false 2026-02-19 03:04:28.366711 | orchestrator | ++ INTERACTIVE=false 2026-02-19 
03:04:28.366720 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-19 03:04:28.366743 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-19 03:04:28.366751 | orchestrator | + source /opt/manager-vars.sh 2026-02-19 03:04:28.366761 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-19 03:04:28.366769 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-19 03:04:28.366793 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-19 03:04:28.366801 | orchestrator | ++ CEPH_VERSION=reef 2026-02-19 03:04:28.366811 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-19 03:04:28.366819 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-19 03:04:28.366829 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-19 03:04:28.366836 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-19 03:04:28.366843 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-19 03:04:28.366852 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-19 03:04:28.366859 | orchestrator | ++ export ARA=false 2026-02-19 03:04:28.366866 | orchestrator | ++ ARA=false 2026-02-19 03:04:28.366874 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-19 03:04:28.366881 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-19 03:04:28.366888 | orchestrator | ++ export TEMPEST=false 2026-02-19 03:04:28.366895 | orchestrator | ++ TEMPEST=false 2026-02-19 03:04:28.366902 | orchestrator | ++ export IS_ZUUL=true 2026-02-19 03:04:28.366909 | orchestrator | ++ IS_ZUUL=true 2026-02-19 03:04:28.366917 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 03:04:28.366924 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 03:04:28.366931 | orchestrator | ++ export EXTERNAL_API=false 2026-02-19 03:04:28.366939 | orchestrator | ++ EXTERNAL_API=false 2026-02-19 03:04:28.366946 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-19 03:04:28.366957 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-19 03:04:28.366971 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-19 
03:04:28.366990 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-19 03:04:28.367001 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-19 03:04:28.367021 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-19 03:04:28.367033 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-19 03:04:28.378918 | orchestrator | + set -e 2026-02-19 03:04:28.380202 | orchestrator | 2026-02-19 03:04:28.380306 | orchestrator | # PULL IMAGES 2026-02-19 03:04:28.380322 | orchestrator | 2026-02-19 03:04:28.380334 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-19 03:04:28.380349 | orchestrator | ++ export INTERACTIVE=false 2026-02-19 03:04:28.380360 | orchestrator | ++ INTERACTIVE=false 2026-02-19 03:04:28.380371 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-19 03:04:28.380382 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-19 03:04:28.380392 | orchestrator | + source /opt/manager-vars.sh 2026-02-19 03:04:28.380403 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-19 03:04:28.380414 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-19 03:04:28.380425 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-19 03:04:28.380436 | orchestrator | ++ CEPH_VERSION=reef 2026-02-19 03:04:28.380447 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-19 03:04:28.380457 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-19 03:04:28.380468 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-19 03:04:28.380480 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-19 03:04:28.380491 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-19 03:04:28.380501 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-19 03:04:28.380512 | orchestrator | ++ export ARA=false 2026-02-19 03:04:28.380523 | orchestrator | ++ ARA=false 2026-02-19 03:04:28.380539 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-19 03:04:28.380549 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-19 03:04:28.380560 | orchestrator | ++ export TEMPEST=false 
2026-02-19 03:04:28.380570 | orchestrator | ++ TEMPEST=false 2026-02-19 03:04:28.380581 | orchestrator | ++ export IS_ZUUL=true 2026-02-19 03:04:28.380592 | orchestrator | ++ IS_ZUUL=true 2026-02-19 03:04:28.380602 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 03:04:28.380728 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 03:04:28.380741 | orchestrator | ++ export EXTERNAL_API=false 2026-02-19 03:04:28.380752 | orchestrator | ++ EXTERNAL_API=false 2026-02-19 03:04:28.380763 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-19 03:04:28.380774 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-19 03:04:28.380816 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-19 03:04:28.380828 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-19 03:04:28.380839 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-19 03:04:28.380849 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-19 03:04:28.380860 | orchestrator | + echo 2026-02-19 03:04:28.380871 | orchestrator | + echo '# PULL IMAGES' 2026-02-19 03:04:28.380882 | orchestrator | + echo 2026-02-19 03:04:28.380904 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-19 03:04:28.437344 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-19 03:04:28.437435 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-19 03:04:30.092577 | orchestrator | 2026-02-19 03:04:30 | INFO  | Trying to run play pull-images in environment custom 2026-02-19 03:04:40.236258 | orchestrator | 2026-02-19 03:04:40 | INFO  | Task cb549fc1-1d62-4f61-bf24-5de7592c06f7 (pull-images) was prepared for execution. 2026-02-19 03:04:40.236362 | orchestrator | 2026-02-19 03:04:40 | INFO  | Task cb549fc1-1d62-4f61-bf24-5de7592c06f7 is running in background. No more output. Check ARA for logs. 
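Each deploy script in this run begins by sourcing `/opt/configuration/scripts/include.sh` and `/opt/manager-vars.sh`, which export the testbed parameters echoed in the trace above. A minimal sketch of that variable file, reconstructed only from the values the log prints (the real file may contain more):

```shell
# Sketch of /opt/manager-vars.sh, reconstructed from the `++ export` lines
# in the log above; not the file's verbatim contents.
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export CONFIGURATION_VERSION=main
export MANAGER_VERSION=9.5.0
export OPENSTACK_VERSION=2024.2
export ARA=false
export DEPLOY_MODE=manager
export TEMPEST=false
export IS_ZUUL=true
export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14
export EXTERNAL_API=false
export IMAGE_USER=ubuntu
export IMAGE_NODE_USER=ubuntu
export CEPH_STACK=ceph-ansible
```

Sourcing this file is why the same variable dump repeats at the start of every script in the log.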
2026-02-19 03:04:40.548307 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-02-19 03:04:52.592405 | orchestrator | 2026-02-19 03:04:52 | INFO  | Task 9b44f32e-04b7-4dca-8114-9869ce8c8e88 (cgit) was prepared for execution. 2026-02-19 03:04:52.592511 | orchestrator | 2026-02-19 03:04:52 | INFO  | Task 9b44f32e-04b7-4dca-8114-9869ce8c8e88 is running in background. No more output. Check ARA for logs. 2026-02-19 03:05:04.759411 | orchestrator | 2026-02-19 03:05:04 | INFO  | Task cc2f6221-857f-4302-ab32-ddbe55519e84 (dotfiles) was prepared for execution. 2026-02-19 03:05:04.759523 | orchestrator | 2026-02-19 03:05:04 | INFO  | Task cc2f6221-857f-4302-ab32-ddbe55519e84 is running in background. No more output. Check ARA for logs. 2026-02-19 03:05:17.885384 | orchestrator | 2026-02-19 03:05:17 | INFO  | Task cce8ba47-2844-400e-9efe-dd73dfcd8666 (homer) was prepared for execution. 2026-02-19 03:05:17.885878 | orchestrator | 2026-02-19 03:05:17 | INFO  | Task cce8ba47-2844-400e-9efe-dd73dfcd8666 is running in background. No more output. Check ARA for logs. 2026-02-19 03:05:30.483458 | orchestrator | 2026-02-19 03:05:30 | INFO  | Task e5571d01-3027-4ef2-9369-56438d6974c9 (phpmyadmin) was prepared for execution. 2026-02-19 03:05:30.483552 | orchestrator | 2026-02-19 03:05:30 | INFO  | Task e5571d01-3027-4ef2-9369-56438d6974c9 is running in background. No more output. Check ARA for logs. 2026-02-19 03:05:43.213953 | orchestrator | 2026-02-19 03:05:43 | INFO  | Task ad9415ef-76ac-43bf-beec-87deafff4fbc (sosreport) was prepared for execution. 2026-02-19 03:05:43.214071 | orchestrator | 2026-02-19 03:05:43 | INFO  | Task ad9415ef-76ac-43bf-beec-87deafff4fbc is running in background. No more output. Check ARA for logs. 
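The `001-helpers.sh` step above queues one background play per helper service and immediately moves on ("running in background. No more output."). From the task names in the log, the loop could be sketched as follows (illustrative only, not the script's verbatim contents):

```shell
#!/usr/bin/env bash
set -e

# Illustrative loop: one `osism apply` per helper service named in the log
# (cgit, dotfiles, homer, phpmyadmin, sosreport). Echoed here rather than
# executed, since the `osism` CLI only exists on the manager node.
for service in cgit dotfiles homer phpmyadmin sosreport; do
    echo "osism apply --no-wait ${service}"
done
```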
2026-02-19 03:05:43.589838 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-02-19 03:05:43.597456 | orchestrator | + set -e 2026-02-19 03:05:43.597518 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-19 03:05:43.597526 | orchestrator | ++ export INTERACTIVE=false 2026-02-19 03:05:43.597531 | orchestrator | ++ INTERACTIVE=false 2026-02-19 03:05:43.597537 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-19 03:05:43.597541 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-19 03:05:43.597696 | orchestrator | + source /opt/manager-vars.sh 2026-02-19 03:05:43.597758 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-19 03:05:43.597764 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-19 03:05:43.597768 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-19 03:05:43.597772 | orchestrator | ++ CEPH_VERSION=reef 2026-02-19 03:05:43.597777 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-19 03:05:43.597781 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-19 03:05:43.597786 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-19 03:05:43.597790 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-19 03:05:43.597794 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-19 03:05:43.597798 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-19 03:05:43.597802 | orchestrator | ++ export ARA=false 2026-02-19 03:05:43.597806 | orchestrator | ++ ARA=false 2026-02-19 03:05:43.597810 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-19 03:05:43.597834 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-19 03:05:43.597837 | orchestrator | ++ export TEMPEST=false 2026-02-19 03:05:43.597841 | orchestrator | ++ TEMPEST=false 2026-02-19 03:05:43.597845 | orchestrator | ++ export IS_ZUUL=true 2026-02-19 03:05:43.597849 | orchestrator | ++ IS_ZUUL=true 2026-02-19 03:05:43.597865 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 03:05:43.597872 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 03:05:43.597876 | orchestrator | ++ export EXTERNAL_API=false 2026-02-19 03:05:43.597880 | orchestrator | ++ EXTERNAL_API=false 2026-02-19 03:05:43.597883 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-19 03:05:43.597887 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-19 03:05:43.597891 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-19 03:05:43.597895 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-19 03:05:43.597898 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-19 03:05:43.597902 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-19 03:05:43.598063 | orchestrator | ++ semver 9.5.0 8.0.3 2026-02-19 03:05:43.668763 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-19 03:05:43.668871 | orchestrator | + osism apply frr 2026-02-19 03:05:55.973050 | orchestrator | 2026-02-19 03:05:55 | INFO  | Task 9c022952-d330-4673-a5cb-da16c5314a1c (frr) was prepared for execution. 2026-02-19 03:05:55.973148 | orchestrator | 2026-02-19 03:05:55 | INFO  | It takes a moment until task 9c022952-d330-4673-a5cb-da16c5314a1c (frr) has been started and output is visible here. 
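The `semver 9.5.0 8.0.3` call above (and the earlier `semver 9.5.0 7.0.0`) gates each step on the manager version: the helper prints a comparison result, and the script proceeds when `[[ 1 -ge 0 ]]` holds. A minimal sketch of that gate pattern, assuming a `semver` helper built on GNU `sort -V` that prints 1/0/-1 (an assumption; the testbed's actual helper may differ):

```shell
#!/usr/bin/env bash
# Hypothetical semver comparator: prints 1 if a > b, 0 if equal, -1 if a < b.
# Relies on GNU sort's version-sort (-V) for the ordering.
semver() {
    local a="$1" b="$2"
    if [[ "$a" == "$b" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$a" "$b" | sort -V | head -n1)" == "$a" ]]; then
        echo -1
    else
        echo 1
    fi
}

# Gate pattern seen in the log: only run the play on manager >= 8.0.3.
if [[ "$(semver "9.5.0" "8.0.3")" -ge 0 ]]; then
    echo "would run: osism apply frr"
fi
```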
2026-02-19 03:06:25.960007 | orchestrator | 2026-02-19 03:06:25.960080 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-19 03:06:25.960088 | orchestrator | 2026-02-19 03:06:25.960093 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-19 03:06:25.960102 | orchestrator | Thursday 19 February 2026 03:06:02 +0000 (0:00:00.182) 0:00:00.182 ***** 2026-02-19 03:06:25.960107 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-19 03:06:25.960112 | orchestrator | 2026-02-19 03:06:25.960116 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-19 03:06:25.960120 | orchestrator | Thursday 19 February 2026 03:06:02 +0000 (0:00:00.181) 0:00:00.363 ***** 2026-02-19 03:06:25.960125 | orchestrator | changed: [testbed-manager] 2026-02-19 03:06:25.960129 | orchestrator | 2026-02-19 03:06:25.960133 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-19 03:06:25.960140 | orchestrator | Thursday 19 February 2026 03:06:03 +0000 (0:00:01.172) 0:00:01.536 ***** 2026-02-19 03:06:25.960146 | orchestrator | changed: [testbed-manager] 2026-02-19 03:06:25.960152 | orchestrator | 2026-02-19 03:06:25.960158 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-19 03:06:25.960164 | orchestrator | Thursday 19 February 2026 03:06:14 +0000 (0:00:11.304) 0:00:12.840 ***** 2026-02-19 03:06:25.960170 | orchestrator | ok: [testbed-manager] 2026-02-19 03:06:25.960178 | orchestrator | 2026-02-19 03:06:25.960187 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-19 03:06:25.960194 | orchestrator | Thursday 19 February 2026 03:06:15 +0000 (0:00:00.933) 0:00:13.774 ***** 2026-02-19 
03:06:25.960200 | orchestrator | changed: [testbed-manager] 2026-02-19 03:06:25.960206 | orchestrator | 2026-02-19 03:06:25.960212 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-19 03:06:25.960218 | orchestrator | Thursday 19 February 2026 03:06:16 +0000 (0:00:00.945) 0:00:14.719 ***** 2026-02-19 03:06:25.960224 | orchestrator | ok: [testbed-manager] 2026-02-19 03:06:25.960229 | orchestrator | 2026-02-19 03:06:25.960236 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-19 03:06:25.960243 | orchestrator | Thursday 19 February 2026 03:06:17 +0000 (0:00:01.233) 0:00:15.953 ***** 2026-02-19 03:06:25.960249 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:06:25.960255 | orchestrator | 2026-02-19 03:06:25.960261 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-19 03:06:25.960268 | orchestrator | Thursday 19 February 2026 03:06:18 +0000 (0:00:00.142) 0:00:16.096 ***** 2026-02-19 03:06:25.960292 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:06:25.960299 | orchestrator | 2026-02-19 03:06:25.960305 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-19 03:06:25.960311 | orchestrator | Thursday 19 February 2026 03:06:18 +0000 (0:00:00.144) 0:00:16.240 ***** 2026-02-19 03:06:25.960317 | orchestrator | changed: [testbed-manager] 2026-02-19 03:06:25.960325 | orchestrator | 2026-02-19 03:06:25.960329 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-19 03:06:25.960333 | orchestrator | Thursday 19 February 2026 03:06:19 +0000 (0:00:00.933) 0:00:17.173 ***** 2026-02-19 03:06:25.960337 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-19 03:06:25.960341 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-19 03:06:25.960346 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-19 03:06:25.960350 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-19 03:06:25.960354 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-19 03:06:25.960358 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-19 03:06:25.960362 | orchestrator | 2026-02-19 03:06:25.960366 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-19 03:06:25.960370 | orchestrator | Thursday 19 February 2026 03:06:21 +0000 (0:00:02.046) 0:00:19.220 ***** 2026-02-19 03:06:25.960373 | orchestrator | ok: [testbed-manager] 2026-02-19 03:06:25.960377 | orchestrator | 2026-02-19 03:06:25.960381 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-19 03:06:25.960385 | orchestrator | Thursday 19 February 2026 03:06:23 +0000 (0:00:02.846) 0:00:22.066 ***** 2026-02-19 03:06:25.960388 | orchestrator | changed: [testbed-manager] 2026-02-19 03:06:25.960392 | orchestrator | 2026-02-19 03:06:25.960396 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:06:25.960400 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:06:25.960405 | orchestrator | 2026-02-19 03:06:25.960409 | orchestrator | 2026-02-19 03:06:25.960416 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:06:25.960420 | orchestrator | Thursday 19 February 2026 03:06:25 +0000 (0:00:01.520) 0:00:23.587 ***** 2026-02-19 03:06:25.960424 | 
orchestrator | =============================================================================== 2026-02-19 03:06:25.960427 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.30s 2026-02-19 03:06:25.960431 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.85s 2026-02-19 03:06:25.960435 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.05s 2026-02-19 03:06:25.960439 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.52s 2026-02-19 03:06:25.960442 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.23s 2026-02-19 03:06:25.960458 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.17s 2026-02-19 03:06:25.960462 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.95s 2026-02-19 03:06:25.960466 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.93s 2026-02-19 03:06:25.960469 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.93s 2026-02-19 03:06:25.960473 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.18s 2026-02-19 03:06:25.960477 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s 2026-02-19 03:06:25.960480 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-02-19 03:06:26.352055 | orchestrator | + osism apply kubernetes 2026-02-19 03:06:28.879319 | orchestrator | 2026-02-19 03:06:28 | INFO  | Task 0259214d-fc5d-4a4f-b457-2086143c7117 (kubernetes) was prepared for execution. 
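The frr role's "Set sysctl parameters" task above enabled IPv4 forwarding, disabled ICMP redirects, and turned on multipath hashing. The same six keys could be applied by hand via a sysctl.d drop-in; this is a sketch using the values from the task output, not the role's actual template (a demo path under /tmp is used so the snippet needs no root):

```shell
#!/usr/bin/env bash
set -e

# Sketch: the sysctl keys the frr role set, written as a sysctl.d-style
# drop-in. Values are taken verbatim from the task output above.
cat <<'EOF' > /tmp/90-frr-demo.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
EOF

# On a real host this file would live in /etc/sysctl.d/ and be applied
# with `sysctl --system` (requires root).
grep -c '=' /tmp/90-frr-demo.conf
```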
2026-02-19 03:06:28.879392 | orchestrator | 2026-02-19 03:06:28 | INFO  | It takes a moment until task 0259214d-fc5d-4a4f-b457-2086143c7117 (kubernetes) has been started and output is visible here. 2026-02-19 03:06:54.002859 | orchestrator | 2026-02-19 03:06:54.003006 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-19 03:06:54.003024 | orchestrator | 2026-02-19 03:06:54.003036 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-19 03:06:54.003049 | orchestrator | Thursday 19 February 2026 03:06:33 +0000 (0:00:00.174) 0:00:00.174 ***** 2026-02-19 03:06:54.003060 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:06:54.003073 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:06:54.003084 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:06:54.003096 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:06:54.003107 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:06:54.003117 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:06:54.003128 | orchestrator | 2026-02-19 03:06:54.003140 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-19 03:06:54.003151 | orchestrator | Thursday 19 February 2026 03:06:34 +0000 (0:00:00.996) 0:00:01.171 ***** 2026-02-19 03:06:54.003162 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:06:54.003174 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:06:54.003185 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:06:54.003196 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:06:54.003206 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:06:54.003217 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:06:54.003228 | orchestrator | 2026-02-19 03:06:54.003239 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-19 03:06:54.003253 | orchestrator | Thursday 19 February 2026 
03:06:35 +0000 (0:00:00.633) 0:00:01.804 ***** 2026-02-19 03:06:54.003265 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:06:54.003275 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:06:54.003286 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:06:54.003297 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:06:54.003308 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:06:54.003319 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:06:54.003330 | orchestrator | 2026-02-19 03:06:54.003343 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-19 03:06:54.003356 | orchestrator | Thursday 19 February 2026 03:06:36 +0000 (0:00:00.892) 0:00:02.697 ***** 2026-02-19 03:06:54.003369 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:06:54.003382 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:06:54.003396 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:06:54.003413 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:06:54.003427 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:06:54.003446 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:06:54.003466 | orchestrator | 2026-02-19 03:06:54.003485 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-19 03:06:54.003505 | orchestrator | Thursday 19 February 2026 03:06:38 +0000 (0:00:02.599) 0:00:05.296 ***** 2026-02-19 03:06:54.003524 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:06:54.003542 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:06:54.003559 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:06:54.003578 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:06:54.003596 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:06:54.003644 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:06:54.003665 | orchestrator | 2026-02-19 03:06:54.003685 | orchestrator | TASK [k3s_prereq : Enable 
IPv6 router advertisements] ************************** 2026-02-19 03:06:54.003702 | orchestrator | Thursday 19 February 2026 03:06:39 +0000 (0:00:01.181) 0:00:06.478 ***** 2026-02-19 03:06:54.003721 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:06:54.003785 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:06:54.003806 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:06:54.003825 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:06:54.003844 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:06:54.003862 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:06:54.003880 | orchestrator | 2026-02-19 03:06:54.003904 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-19 03:06:54.003916 | orchestrator | Thursday 19 February 2026 03:06:40 +0000 (0:00:01.031) 0:00:07.509 ***** 2026-02-19 03:06:54.003926 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:06:54.003937 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:06:54.003948 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:06:54.003959 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:06:54.003969 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:06:54.003980 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:06:54.003991 | orchestrator | 2026-02-19 03:06:54.004001 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-19 03:06:54.004018 | orchestrator | Thursday 19 February 2026 03:06:41 +0000 (0:00:00.587) 0:00:08.097 ***** 2026-02-19 03:06:54.004035 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:06:54.004062 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:06:54.004082 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:06:54.004099 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:06:54.004118 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:06:54.004136 | orchestrator | skipping: 
[testbed-node-2] 2026-02-19 03:06:54.004153 | orchestrator | 2026-02-19 03:06:54.004172 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-19 03:06:54.004183 | orchestrator | Thursday 19 February 2026 03:06:42 +0000 (0:00:00.773) 0:00:08.870 ***** 2026-02-19 03:06:54.004195 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 03:06:54.004206 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 03:06:54.004217 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:06:54.004228 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 03:06:54.004238 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 03:06:54.004249 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:06:54.004260 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 03:06:54.004271 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 03:06:54.004282 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:06:54.004293 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 03:06:54.004333 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 03:06:54.004344 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:06:54.004355 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 03:06:54.004366 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 03:06:54.004377 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:06:54.004388 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 03:06:54.004398 | orchestrator | 
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 03:06:54.004409 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:06:54.004420 | orchestrator | 2026-02-19 03:06:54.004430 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-19 03:06:54.004441 | orchestrator | Thursday 19 February 2026 03:06:42 +0000 (0:00:00.604) 0:00:09.474 ***** 2026-02-19 03:06:54.004452 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:06:54.004463 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:06:54.004473 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:06:54.004497 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:06:54.004508 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:06:54.004518 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:06:54.004529 | orchestrator | 2026-02-19 03:06:54.004539 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-19 03:06:54.004551 | orchestrator | Thursday 19 February 2026 03:06:44 +0000 (0:00:01.081) 0:00:10.556 ***** 2026-02-19 03:06:54.004562 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:06:54.004573 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:06:54.004584 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:06:54.004594 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:06:54.004605 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:06:54.004639 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:06:54.004650 | orchestrator | 2026-02-19 03:06:54.004661 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-19 03:06:54.004671 | orchestrator | Thursday 19 February 2026 03:06:44 +0000 (0:00:00.751) 0:00:11.307 ***** 2026-02-19 03:06:54.004682 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:06:54.004693 | orchestrator | changed: [testbed-node-1] 
2026-02-19 03:06:54.004703 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:06:54.004714 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:06:54.004724 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:06:54.004735 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:06:54.004746 | orchestrator | 2026-02-19 03:06:54.004757 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-19 03:06:54.004768 | orchestrator | Thursday 19 February 2026 03:06:50 +0000 (0:00:05.469) 0:00:16.777 ***** 2026-02-19 03:06:54.004778 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:06:54.004797 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:06:54.004808 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:06:54.004819 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:06:54.004830 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:06:54.004840 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:06:54.004851 | orchestrator | 2026-02-19 03:06:54.004861 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-19 03:06:54.004872 | orchestrator | Thursday 19 February 2026 03:06:51 +0000 (0:00:00.847) 0:00:17.624 ***** 2026-02-19 03:06:54.004883 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:06:54.004893 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:06:54.004904 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:06:54.004915 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:06:54.004925 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:06:54.004936 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:06:54.004947 | orchestrator | 2026-02-19 03:06:54.004958 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-19 03:06:54.004971 | orchestrator | Thursday 19 February 2026 
03:06:52 +0000 (0:00:01.276) 0:00:18.901 ***** 2026-02-19 03:06:54.004981 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:06:54.004992 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:06:54.005003 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:06:54.005013 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:06:54.005024 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:06:54.005034 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:06:54.005045 | orchestrator | 2026-02-19 03:06:54.005056 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-19 03:06:54.005066 | orchestrator | Thursday 19 February 2026 03:06:52 +0000 (0:00:00.624) 0:00:19.526 ***** 2026-02-19 03:06:54.005077 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-19 03:06:54.005097 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-19 03:06:54.005107 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:06:54.005118 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-19 03:06:54.005138 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-19 03:06:54.005148 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:06:54.005159 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-19 03:06:54.005169 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-19 03:06:54.005180 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:06:54.005191 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-19 03:06:54.005201 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-19 03:06:54.005212 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:06:54.005222 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-19 03:06:54.005233 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-02-19 03:06:54.005243 | 
orchestrator | skipping: [testbed-node-1] 2026-02-19 03:06:54.005254 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-19 03:06:54.005264 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-19 03:06:54.005276 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:06:54.005286 | orchestrator | 2026-02-19 03:06:54.005297 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-19 03:06:54.005315 | orchestrator | Thursday 19 February 2026 03:06:53 +0000 (0:00:00.996) 0:00:20.522 ***** 2026-02-19 03:08:08.213118 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:08:08.213210 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:08:08.213216 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:08:08.213221 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:08:08.213225 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:08:08.213229 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:08:08.213234 | orchestrator | 2026-02-19 03:08:08.213240 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-19 03:08:08.213246 | orchestrator | Thursday 19 February 2026 03:06:54 +0000 (0:00:00.500) 0:00:21.022 ***** 2026-02-19 03:08:08.213250 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:08:08.213254 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:08:08.213258 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:08:08.213262 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:08:08.213266 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:08:08.213269 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:08:08.213273 | orchestrator | 2026-02-19 03:08:08.213277 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-19 03:08:08.213281 | orchestrator | 2026-02-19 03:08:08.213285 | 
orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-19 03:08:08.213290 | orchestrator | Thursday 19 February 2026 03:06:55 +0000 (0:00:01.083) 0:00:22.106 ***** 2026-02-19 03:08:08.213294 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:08.213299 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:08.213303 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:08.213306 | orchestrator | 2026-02-19 03:08:08.213310 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-19 03:08:08.213314 | orchestrator | Thursday 19 February 2026 03:06:56 +0000 (0:00:01.250) 0:00:23.357 ***** 2026-02-19 03:08:08.213318 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:08.213321 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:08.213325 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:08.213329 | orchestrator | 2026-02-19 03:08:08.213333 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-19 03:08:08.213337 | orchestrator | Thursday 19 February 2026 03:06:58 +0000 (0:00:01.762) 0:00:25.119 ***** 2026-02-19 03:08:08.213340 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:08.213344 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:08.213348 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:08.213352 | orchestrator | 2026-02-19 03:08:08.213356 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-19 03:08:08.213360 | orchestrator | Thursday 19 February 2026 03:06:59 +0000 (0:00:00.842) 0:00:25.961 ***** 2026-02-19 03:08:08.213377 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:08.213381 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:08.213384 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:08.213388 | orchestrator | 2026-02-19 03:08:08.213392 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] 
********************************* 2026-02-19 03:08:08.213395 | orchestrator | Thursday 19 February 2026 03:07:00 +0000 (0:00:00.610) 0:00:26.572 ***** 2026-02-19 03:08:08.213399 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:08:08.213403 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:08:08.213407 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:08:08.213410 | orchestrator | 2026-02-19 03:08:08.213414 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-02-19 03:08:08.213430 | orchestrator | Thursday 19 February 2026 03:07:00 +0000 (0:00:00.289) 0:00:26.861 ***** 2026-02-19 03:08:08.213434 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:08.213438 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:08:08.213442 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:08:08.213445 | orchestrator | 2026-02-19 03:08:08.213449 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-02-19 03:08:08.213453 | orchestrator | Thursday 19 February 2026 03:07:01 +0000 (0:00:00.870) 0:00:27.732 ***** 2026-02-19 03:08:08.213456 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:08:08.213460 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:08.213464 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:08:08.213468 | orchestrator | 2026-02-19 03:08:08.213471 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-02-19 03:08:08.213475 | orchestrator | Thursday 19 February 2026 03:07:02 +0000 (0:00:01.369) 0:00:29.102 ***** 2026-02-19 03:08:08.213479 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:08:08.213482 | orchestrator | 2026-02-19 03:08:08.213486 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-02-19 03:08:08.213490 | orchestrator | 
Thursday 19 February 2026 03:07:03 +0000 (0:00:00.440) 0:00:29.542 ***** 2026-02-19 03:08:08.213493 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:08.213497 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:08.213501 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:08.213504 | orchestrator | 2026-02-19 03:08:08.213508 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-19 03:08:08.213513 | orchestrator | Thursday 19 February 2026 03:07:04 +0000 (0:00:01.781) 0:00:31.323 ***** 2026-02-19 03:08:08.213519 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:08:08.213524 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:08:08.213530 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:08.213536 | orchestrator | 2026-02-19 03:08:08.213542 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-19 03:08:08.213547 | orchestrator | Thursday 19 February 2026 03:07:05 +0000 (0:00:00.532) 0:00:31.856 ***** 2026-02-19 03:08:08.213553 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:08:08.213558 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:08:08.213564 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:08.213570 | orchestrator | 2026-02-19 03:08:08.213576 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-19 03:08:08.213581 | orchestrator | Thursday 19 February 2026 03:07:06 +0000 (0:00:01.053) 0:00:32.909 ***** 2026-02-19 03:08:08.213587 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:08:08.213592 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:08:08.213597 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:08.213603 | orchestrator | 2026-02-19 03:08:08.213609 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-19 03:08:08.213678 | orchestrator | Thursday 19 February 
2026 03:07:07 +0000 (0:00:01.299) 0:00:34.209 ***** 2026-02-19 03:08:08.213688 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:08:08.213701 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:08:08.213708 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:08:08.213714 | orchestrator | 2026-02-19 03:08:08.213721 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-19 03:08:08.213725 | orchestrator | Thursday 19 February 2026 03:07:08 +0000 (0:00:00.343) 0:00:34.553 ***** 2026-02-19 03:08:08.213730 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:08:08.213734 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:08:08.213740 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:08:08.213744 | orchestrator | 2026-02-19 03:08:08.213749 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-19 03:08:08.213754 | orchestrator | Thursday 19 February 2026 03:07:08 +0000 (0:00:00.645) 0:00:35.198 ***** 2026-02-19 03:08:08.213758 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:08.213763 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:08:08.213767 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:08:08.213772 | orchestrator | 2026-02-19 03:08:08.213782 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-19 03:08:08.213786 | orchestrator | Thursday 19 February 2026 03:07:09 +0000 (0:00:01.149) 0:00:36.348 ***** 2026-02-19 03:08:08.213791 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:08.213795 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:08.213800 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:08.213804 | orchestrator | 2026-02-19 03:08:08.213808 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-19 03:08:08.213813 | orchestrator | Thursday 19 February 2026 03:07:12 +0000 
(0:00:02.782) 0:00:39.130 ***** 2026-02-19 03:08:08.213817 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:08.213822 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:08.213826 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:08.213834 | orchestrator | 2026-02-19 03:08:08.213839 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-19 03:08:08.213844 | orchestrator | Thursday 19 February 2026 03:07:12 +0000 (0:00:00.285) 0:00:39.417 ***** 2026-02-19 03:08:08.213849 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-19 03:08:08.213856 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-19 03:08:08.213861 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-19 03:08:08.213865 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-19 03:08:08.213869 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-19 03:08:08.213874 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-19 03:08:08.213878 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-19 03:08:08.213883 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-02-19 03:08:08.213888 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-19 03:08:08.213892 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-19 03:08:08.213905 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-19 03:08:08.213920 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-19 03:08:08.213925 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-19 03:08:08.213930 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-19 03:08:08.213934 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-02-19 03:08:08.213939 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:08.213943 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:08.213948 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:08.213952 | orchestrator | 2026-02-19 03:08:08.213961 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-19 03:08:08.213966 | orchestrator | Thursday 19 February 2026 03:08:06 +0000 (0:00:54.019) 0:01:33.436 ***** 2026-02-19 03:08:08.213970 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:08:08.213975 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:08:08.213979 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:08:08.213982 | orchestrator | 2026-02-19 03:08:08.213986 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-19 03:08:08.213990 | orchestrator | Thursday 19 February 2026 03:08:07 +0000 (0:00:00.287) 0:01:33.723 ***** 2026-02-19 03:08:08.213997 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:49.247970 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:08:49.248064 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:08:49.248074 | orchestrator | 2026-02-19 03:08:49.248082 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-19 03:08:49.248090 | orchestrator | Thursday 19 February 2026 03:08:08 +0000 (0:00:01.015) 0:01:34.739 ***** 2026-02-19 03:08:49.248097 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:49.248103 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:08:49.248109 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:08:49.248115 | orchestrator | 2026-02-19 03:08:49.248121 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-19 03:08:49.248127 | orchestrator | Thursday 19 February 2026 03:08:09 +0000 (0:00:01.165) 0:01:35.904 ***** 2026-02-19 03:08:49.248133 
| orchestrator | changed: [testbed-node-2] 2026-02-19 03:08:49.248139 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:49.248144 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:08:49.248150 | orchestrator | 2026-02-19 03:08:49.248156 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-19 03:08:49.248162 | orchestrator | Thursday 19 February 2026 03:08:34 +0000 (0:00:25.538) 0:02:01.443 ***** 2026-02-19 03:08:49.248167 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:49.248174 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:49.248180 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:49.248186 | orchestrator | 2026-02-19 03:08:49.248191 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-19 03:08:49.248197 | orchestrator | Thursday 19 February 2026 03:08:35 +0000 (0:00:00.662) 0:02:02.105 ***** 2026-02-19 03:08:49.248203 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:49.248209 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:49.248215 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:49.248221 | orchestrator | 2026-02-19 03:08:49.248226 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-19 03:08:49.248232 | orchestrator | Thursday 19 February 2026 03:08:36 +0000 (0:00:00.645) 0:02:02.751 ***** 2026-02-19 03:08:49.248238 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:49.248244 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:08:49.248249 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:08:49.248255 | orchestrator | 2026-02-19 03:08:49.248261 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-19 03:08:49.248283 | orchestrator | Thursday 19 February 2026 03:08:36 +0000 (0:00:00.631) 0:02:03.383 ***** 2026-02-19 03:08:49.248290 | orchestrator | ok: [testbed-node-0] 
2026-02-19 03:08:49.248295 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:49.248301 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:49.248307 | orchestrator | 2026-02-19 03:08:49.248312 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-19 03:08:49.248318 | orchestrator | Thursday 19 February 2026 03:08:37 +0000 (0:00:00.809) 0:02:04.193 ***** 2026-02-19 03:08:49.248324 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:49.248329 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:49.248335 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:49.248341 | orchestrator | 2026-02-19 03:08:49.248347 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-19 03:08:49.248353 | orchestrator | Thursday 19 February 2026 03:08:37 +0000 (0:00:00.283) 0:02:04.477 ***** 2026-02-19 03:08:49.248363 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:49.248377 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:08:49.248389 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:08:49.248398 | orchestrator | 2026-02-19 03:08:49.248407 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-19 03:08:49.248417 | orchestrator | Thursday 19 February 2026 03:08:38 +0000 (0:00:00.614) 0:02:05.091 ***** 2026-02-19 03:08:49.248425 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:49.248434 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:08:49.248444 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:08:49.248454 | orchestrator | 2026-02-19 03:08:49.248463 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-19 03:08:49.248473 | orchestrator | Thursday 19 February 2026 03:08:39 +0000 (0:00:00.629) 0:02:05.721 ***** 2026-02-19 03:08:49.248482 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:49.248491 | 
orchestrator | changed: [testbed-node-1] 2026-02-19 03:08:49.248502 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:08:49.248512 | orchestrator | 2026-02-19 03:08:49.248523 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-19 03:08:49.248533 | orchestrator | Thursday 19 February 2026 03:08:40 +0000 (0:00:00.873) 0:02:06.595 ***** 2026-02-19 03:08:49.248546 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:08:49.248555 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:08:49.248565 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:08:49.248575 | orchestrator | 2026-02-19 03:08:49.248584 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-19 03:08:49.248595 | orchestrator | Thursday 19 February 2026 03:08:41 +0000 (0:00:01.039) 0:02:07.635 ***** 2026-02-19 03:08:49.248605 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:08:49.248615 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:08:49.248648 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:08:49.248657 | orchestrator | 2026-02-19 03:08:49.248667 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-19 03:08:49.248678 | orchestrator | Thursday 19 February 2026 03:08:41 +0000 (0:00:00.295) 0:02:07.930 ***** 2026-02-19 03:08:49.248686 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:08:49.248693 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:08:49.248699 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:08:49.248707 | orchestrator | 2026-02-19 03:08:49.248713 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-19 03:08:49.248723 | orchestrator | Thursday 19 February 2026 03:08:41 +0000 (0:00:00.275) 0:02:08.205 ***** 2026-02-19 03:08:49.248733 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:49.248741 | orchestrator | 
ok: [testbed-node-1] 2026-02-19 03:08:49.248757 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:49.248767 | orchestrator | 2026-02-19 03:08:49.248777 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-19 03:08:49.248787 | orchestrator | Thursday 19 February 2026 03:08:42 +0000 (0:00:00.616) 0:02:08.822 ***** 2026-02-19 03:08:49.248806 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:08:49.248815 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:08:49.248843 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:08:49.248854 | orchestrator | 2026-02-19 03:08:49.248865 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-19 03:08:49.248878 | orchestrator | Thursday 19 February 2026 03:08:43 +0000 (0:00:00.831) 0:02:09.654 ***** 2026-02-19 03:08:49.248887 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-19 03:08:49.248898 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-19 03:08:49.248905 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-19 03:08:49.248911 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-19 03:08:49.248917 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-19 03:08:49.248922 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-19 03:08:49.248929 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-19 03:08:49.248935 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-19 
03:08:49.248941 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-19 03:08:49.248947 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-19 03:08:49.248953 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-19 03:08:49.248958 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-19 03:08:49.248964 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-19 03:08:49.248970 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-19 03:08:49.248976 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-19 03:08:49.248981 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-19 03:08:49.248987 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-19 03:08:49.248993 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-19 03:08:49.248998 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-19 03:08:49.249004 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-19 03:08:49.249010 | orchestrator | 2026-02-19 03:08:49.249016 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-19 03:08:49.249021 | orchestrator | 2026-02-19 03:08:49.249027 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-19 03:08:49.249033 | orchestrator | Thursday 19 February 2026 03:08:46 +0000 (0:00:03.278) 
0:02:12.932 ***** 2026-02-19 03:08:49.249039 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:08:49.249044 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:08:49.249050 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:08:49.249056 | orchestrator | 2026-02-19 03:08:49.249076 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-19 03:08:49.249082 | orchestrator | Thursday 19 February 2026 03:08:46 +0000 (0:00:00.302) 0:02:13.234 ***** 2026-02-19 03:08:49.249088 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:08:49.249093 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:08:49.249099 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:08:49.249110 | orchestrator | 2026-02-19 03:08:49.249116 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-19 03:08:49.249122 | orchestrator | Thursday 19 February 2026 03:08:47 +0000 (0:00:00.826) 0:02:14.061 ***** 2026-02-19 03:08:49.249127 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:08:49.249133 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:08:49.249139 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:08:49.249144 | orchestrator | 2026-02-19 03:08:49.249150 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-19 03:08:49.249156 | orchestrator | Thursday 19 February 2026 03:08:47 +0000 (0:00:00.324) 0:02:14.386 ***** 2026-02-19 03:08:49.249161 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 03:08:49.249167 | orchestrator | 2026-02-19 03:08:49.249173 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-19 03:08:49.249179 | orchestrator | Thursday 19 February 2026 03:08:48 +0000 (0:00:00.459) 0:02:14.845 ***** 2026-02-19 03:08:49.249185 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:08:49.249191 | 
orchestrator | skipping: [testbed-node-4] 2026-02-19 03:08:49.249197 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:08:49.249202 | orchestrator | 2026-02-19 03:08:49.249208 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-19 03:08:49.249214 | orchestrator | Thursday 19 February 2026 03:08:48 +0000 (0:00:00.461) 0:02:15.306 ***** 2026-02-19 03:08:49.249219 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:08:49.249225 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:08:49.249231 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:08:49.249237 | orchestrator | 2026-02-19 03:08:49.249242 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-19 03:08:49.249248 | orchestrator | Thursday 19 February 2026 03:08:49 +0000 (0:00:00.302) 0:02:15.609 ***** 2026-02-19 03:08:49.249258 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:10:26.052106 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:10:26.052216 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:10:26.052227 | orchestrator | 2026-02-19 03:10:26.052238 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-19 03:10:26.052248 | orchestrator | Thursday 19 February 2026 03:08:49 +0000 (0:00:00.299) 0:02:15.909 ***** 2026-02-19 03:10:26.052256 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:10:26.052264 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:10:26.052273 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:10:26.052282 | orchestrator | 2026-02-19 03:10:26.052291 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-19 03:10:26.052298 | orchestrator | Thursday 19 February 2026 03:08:49 +0000 (0:00:00.621) 0:02:16.530 ***** 2026-02-19 03:10:26.052306 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:10:26.052314 | 
orchestrator | changed: [testbed-node-4] 2026-02-19 03:10:26.052321 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:10:26.052329 | orchestrator | 2026-02-19 03:10:26.052337 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-19 03:10:26.052345 | orchestrator | Thursday 19 February 2026 03:08:51 +0000 (0:00:01.381) 0:02:17.912 ***** 2026-02-19 03:10:26.052353 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:10:26.052361 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:10:26.052368 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:10:26.052375 | orchestrator | 2026-02-19 03:10:26.052383 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-19 03:10:26.052391 | orchestrator | Thursday 19 February 2026 03:08:52 +0000 (0:00:01.256) 0:02:19.168 ***** 2026-02-19 03:10:26.052398 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:10:26.052406 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:10:26.052414 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:10:26.052422 | orchestrator | 2026-02-19 03:10:26.052429 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-19 03:10:26.052459 | orchestrator | 2026-02-19 03:10:26.052468 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-19 03:10:26.052476 | orchestrator | Thursday 19 February 2026 03:09:02 +0000 (0:00:10.092) 0:02:29.260 ***** 2026-02-19 03:10:26.052484 | orchestrator | ok: [testbed-manager] 2026-02-19 03:10:26.052493 | orchestrator | 2026-02-19 03:10:26.052500 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-19 03:10:26.052508 | orchestrator | Thursday 19 February 2026 03:09:03 +0000 (0:00:00.861) 0:02:30.121 ***** 2026-02-19 03:10:26.052515 | orchestrator | changed: [testbed-manager] 2026-02-19 
03:10:26.052522 | orchestrator | 2026-02-19 03:10:26.052529 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-19 03:10:26.052537 | orchestrator | Thursday 19 February 2026 03:09:04 +0000 (0:00:00.641) 0:02:30.763 ***** 2026-02-19 03:10:26.052544 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-19 03:10:26.052552 | orchestrator | 2026-02-19 03:10:26.052559 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-19 03:10:26.052567 | orchestrator | Thursday 19 February 2026 03:09:04 +0000 (0:00:00.551) 0:02:31.314 ***** 2026-02-19 03:10:26.052574 | orchestrator | changed: [testbed-manager] 2026-02-19 03:10:26.052582 | orchestrator | 2026-02-19 03:10:26.052591 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-19 03:10:26.052600 | orchestrator | Thursday 19 February 2026 03:09:05 +0000 (0:00:00.894) 0:02:32.209 ***** 2026-02-19 03:10:26.052609 | orchestrator | changed: [testbed-manager] 2026-02-19 03:10:26.052618 | orchestrator | 2026-02-19 03:10:26.052628 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-19 03:10:26.052660 | orchestrator | Thursday 19 February 2026 03:09:06 +0000 (0:00:00.584) 0:02:32.794 ***** 2026-02-19 03:10:26.052669 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-19 03:10:26.052676 | orchestrator | 2026-02-19 03:10:26.052683 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-19 03:10:26.052690 | orchestrator | Thursday 19 February 2026 03:09:07 +0000 (0:00:01.505) 0:02:34.299 ***** 2026-02-19 03:10:26.052696 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-19 03:10:26.052705 | orchestrator | 2026-02-19 03:10:26.052731 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-02-19 03:10:26.052744 | orchestrator | Thursday 19 February 2026 03:09:08 +0000 (0:00:00.811) 0:02:35.111 *****
2026-02-19 03:10:26.052751 | orchestrator | changed: [testbed-manager]
2026-02-19 03:10:26.052759 | orchestrator |
2026-02-19 03:10:26.052766 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-19 03:10:26.052773 | orchestrator | Thursday 19 February 2026 03:09:09 +0000 (0:00:00.437) 0:02:35.549 *****
2026-02-19 03:10:26.052781 | orchestrator | changed: [testbed-manager]
2026-02-19 03:10:26.052790 | orchestrator |
2026-02-19 03:10:26.052800 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-19 03:10:26.052809 | orchestrator |
2026-02-19 03:10:26.052816 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-19 03:10:26.052824 | orchestrator | Thursday 19 February 2026 03:09:09 +0000 (0:00:00.436) 0:02:35.985 *****
2026-02-19 03:10:26.052832 | orchestrator | ok: [testbed-manager]
2026-02-19 03:10:26.052839 | orchestrator |
2026-02-19 03:10:26.052846 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-19 03:10:26.052853 | orchestrator | Thursday 19 February 2026 03:09:09 +0000 (0:00:00.148) 0:02:36.134 *****
2026-02-19 03:10:26.052859 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-19 03:10:26.052866 | orchestrator |
2026-02-19 03:10:26.052875 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-19 03:10:26.052882 | orchestrator | Thursday 19 February 2026 03:09:10 +0000 (0:00:00.849) 0:02:36.543 *****
2026-02-19 03:10:26.052888 | orchestrator | ok: [testbed-manager]
2026-02-19 03:10:26.052895 | orchestrator |
2026-02-19 03:10:26.052912 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-19 03:10:26.052919 | orchestrator | Thursday 19 February 2026 03:09:10 +0000 (0:00:00.849) 0:02:37.393 *****
2026-02-19 03:10:26.052925 | orchestrator | ok: [testbed-manager]
2026-02-19 03:10:26.052932 | orchestrator |
2026-02-19 03:10:26.052958 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-19 03:10:26.052965 | orchestrator | Thursday 19 February 2026 03:09:12 +0000 (0:00:01.569) 0:02:38.963 *****
2026-02-19 03:10:26.052972 | orchestrator | changed: [testbed-manager]
2026-02-19 03:10:26.052978 | orchestrator |
2026-02-19 03:10:26.052985 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-19 03:10:26.052992 | orchestrator | Thursday 19 February 2026 03:09:13 +0000 (0:00:00.793) 0:02:39.756 *****
2026-02-19 03:10:26.052999 | orchestrator | ok: [testbed-manager]
2026-02-19 03:10:26.053005 | orchestrator |
2026-02-19 03:10:26.053013 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-19 03:10:26.053020 | orchestrator | Thursday 19 February 2026 03:09:13 +0000 (0:00:00.436) 0:02:40.193 *****
2026-02-19 03:10:26.053028 | orchestrator | changed: [testbed-manager]
2026-02-19 03:10:26.053034 | orchestrator |
2026-02-19 03:10:26.053041 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-19 03:10:26.053048 | orchestrator | Thursday 19 February 2026 03:09:20 +0000 (0:00:07.156) 0:02:47.349 *****
2026-02-19 03:10:26.053055 | orchestrator | changed: [testbed-manager]
2026-02-19 03:10:26.053062 | orchestrator |
2026-02-19 03:10:26.053068 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-19 03:10:26.053076 | orchestrator | Thursday 19 February 2026 03:09:32 +0000 (0:00:11.651) 0:02:59.001 *****
2026-02-19 03:10:26.053083 | orchestrator | ok: [testbed-manager]
2026-02-19 03:10:26.053090 | orchestrator |
2026-02-19 03:10:26.053096 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-19 03:10:26.053103 | orchestrator |
2026-02-19 03:10:26.053110 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-19 03:10:26.053118 | orchestrator | Thursday 19 February 2026 03:09:33 +0000 (0:00:00.611) 0:02:59.613 *****
2026-02-19 03:10:26.053126 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:10:26.053133 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:10:26.053141 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:10:26.053148 | orchestrator |
2026-02-19 03:10:26.053155 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-19 03:10:26.053162 | orchestrator | Thursday 19 February 2026 03:09:33 +0000 (0:00:00.266) 0:02:59.879 *****
2026-02-19 03:10:26.053170 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:10:26.053177 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:10:26.053185 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:10:26.053193 | orchestrator |
2026-02-19 03:10:26.053200 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-19 03:10:26.053207 | orchestrator | Thursday 19 February 2026 03:09:33 +0000 (0:00:00.265) 0:03:00.145 *****
2026-02-19 03:10:26.053214 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:10:26.053221 | orchestrator |
2026-02-19 03:10:26.053229 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-19 03:10:26.053236 | orchestrator | Thursday 19 February 2026 03:09:34 +0000 (0:00:00.550) 0:03:00.695 *****
2026-02-19 03:10:26.053243 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-19 03:10:26.053250 | orchestrator |
2026-02-19 03:10:26.053257 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-19 03:10:26.053264 | orchestrator | Thursday 19 February 2026 03:09:34 +0000 (0:00:00.725) 0:03:01.421 *****
2026-02-19 03:10:26.053271 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-19 03:10:26.053278 | orchestrator |
2026-02-19 03:10:26.053285 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-19 03:10:26.053302 | orchestrator | Thursday 19 February 2026 03:09:35 +0000 (0:00:00.727) 0:03:02.148 *****
2026-02-19 03:10:26.053308 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:10:26.053316 | orchestrator |
2026-02-19 03:10:26.053322 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-19 03:10:26.053329 | orchestrator | Thursday 19 February 2026 03:09:35 +0000 (0:00:00.098) 0:03:02.247 *****
2026-02-19 03:10:26.053336 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-19 03:10:26.053342 | orchestrator |
2026-02-19 03:10:26.053349 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-19 03:10:26.053356 | orchestrator | Thursday 19 February 2026 03:09:36 +0000 (0:00:00.872) 0:03:03.120 *****
2026-02-19 03:10:26.053363 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:10:26.053370 | orchestrator |
2026-02-19 03:10:26.053377 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-19 03:10:26.053384 | orchestrator | Thursday 19 February 2026 03:09:36 +0000 (0:00:00.106) 0:03:03.226 *****
2026-02-19 03:10:26.053391 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:10:26.053398 | orchestrator |
2026-02-19 03:10:26.053405 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-19 03:10:26.053412 | orchestrator | Thursday 19 February 2026 03:09:36 +0000 (0:00:00.106) 0:03:03.333 *****
2026-02-19 03:10:26.053419 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:10:26.053426 | orchestrator |
2026-02-19 03:10:26.053433 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-19 03:10:26.053449 | orchestrator | Thursday 19 February 2026 03:09:36 +0000 (0:00:00.121) 0:03:03.454 *****
2026-02-19 03:10:26.053457 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:10:26.053464 | orchestrator |
2026-02-19 03:10:26.053471 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-19 03:10:26.053478 | orchestrator | Thursday 19 February 2026 03:09:37 +0000 (0:00:00.101) 0:03:03.555 *****
2026-02-19 03:10:26.053485 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-19 03:10:26.053492 | orchestrator |
2026-02-19 03:10:26.053499 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-19 03:10:26.053506 | orchestrator | Thursday 19 February 2026 03:09:41 +0000 (0:00:04.555) 0:03:08.111 *****
2026-02-19 03:10:26.053514 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-19 03:10:26.053521 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
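The "Wait for Cilium resources" task above reports `FAILED - RETRYING … (30 retries left)` and later succeeds, which is Ansible's standard `retries`/`until` polling. The role's actual task is not shown in the log; a minimal sketch of that poll-until-ready pattern, with a stand-in `rollout_ready` check in place of the real `kubectl rollout status` call:

```python
import time

def wait_for(check, retries=30, delay=2):
    """Poll `check` until it succeeds, mirroring Ansible's retries/until
    behaviour; raise once the retry budget is exhausted."""
    for _ in range(retries):
        if check():
            return True
        time.sleep(delay)
    raise TimeoutError("resource never became ready")

# Usage: a stand-in check that becomes ready on the third poll.
calls = {"n": 0}
def rollout_ready():
    calls["n"] += 1
    return calls["n"] >= 3

ready = wait_for(rollout_ready, retries=30, delay=0)
```

Each failed poll burns one retry, so a resource that takes ~44 s to come up (as in the recap) simply consumes a few retries before the task turns `ok`.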
2026-02-19 03:10:26.053543 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-19 03:10:47.255538 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-19 03:10:47.255627 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-19 03:10:47.255637 | orchestrator |
2026-02-19 03:10:47.255686 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-19 03:10:47.255694 | orchestrator | Thursday 19 February 2026 03:10:26 +0000 (0:00:44.460) 0:03:52.571 *****
2026-02-19 03:10:47.255700 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-19 03:10:47.255707 | orchestrator |
2026-02-19 03:10:47.255714 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-19 03:10:47.255720 | orchestrator | Thursday 19 February 2026 03:10:27 +0000 (0:00:01.213) 0:03:53.784 *****
2026-02-19 03:10:47.255727 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-19 03:10:47.255733 | orchestrator |
2026-02-19 03:10:47.255739 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-19 03:10:47.255745 | orchestrator | Thursday 19 February 2026 03:10:28 +0000 (0:00:01.535) 0:03:55.320 *****
2026-02-19 03:10:47.255751 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-19 03:10:47.255757 | orchestrator |
2026-02-19 03:10:47.255763 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-19 03:10:47.255770 | orchestrator | Thursday 19 February 2026 03:10:30 +0000 (0:00:01.244) 0:03:56.564 *****
2026-02-19 03:10:47.255796 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:10:47.255802 | orchestrator |
2026-02-19 03:10:47.255808 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-19 03:10:47.255815 | orchestrator | Thursday 19 February 2026 03:10:30 +0000 (0:00:00.112) 0:03:56.676 *****
2026-02-19 03:10:47.255821 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-19 03:10:47.255828 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-19 03:10:47.255834 | orchestrator |
2026-02-19 03:10:47.255840 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-19 03:10:47.255846 | orchestrator | Thursday 19 February 2026 03:10:31 +0000 (0:00:01.775) 0:03:58.451 *****
2026-02-19 03:10:47.255852 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:10:47.255858 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:10:47.255865 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:10:47.255871 | orchestrator |
2026-02-19 03:10:47.255877 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-19 03:10:47.255883 | orchestrator | Thursday 19 February 2026 03:10:32 +0000 (0:00:00.275) 0:03:58.727 *****
2026-02-19 03:10:47.255889 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:10:47.255895 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:10:47.255902 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:10:47.255912 | orchestrator |
2026-02-19 03:10:47.255922 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-19 03:10:47.255935 | orchestrator |
2026-02-19 03:10:47.255951 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-19 03:10:47.255960 | orchestrator | Thursday 19 February 2026 03:10:33 +0000 (0:00:00.818) 0:03:59.545 *****
2026-02-19 03:10:47.255970 | orchestrator | ok: [testbed-manager]
2026-02-19 03:10:47.255980 | orchestrator |
2026-02-19 03:10:47.255990 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-19 03:10:47.256000 | orchestrator | Thursday 19 February 2026 03:10:33 +0000 (0:00:00.322) 0:03:59.868 *****
2026-02-19 03:10:47.256009 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-19 03:10:47.256019 | orchestrator |
2026-02-19 03:10:47.256030 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-19 03:10:47.256040 | orchestrator | Thursday 19 February 2026 03:10:33 +0000 (0:00:00.234) 0:04:00.102 *****
2026-02-19 03:10:47.256050 | orchestrator | changed: [testbed-manager]
2026-02-19 03:10:47.256061 | orchestrator |
2026-02-19 03:10:47.256071 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-19 03:10:47.256082 | orchestrator |
2026-02-19 03:10:47.256092 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-19 03:10:47.256104 | orchestrator | Thursday 19 February 2026 03:10:38 +0000 (0:00:04.921) 0:04:05.024 *****
2026-02-19 03:10:47.256111 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:10:47.256118 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:10:47.256125 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:10:47.256132 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:10:47.256140 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:10:47.256146 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:10:47.256153 | orchestrator |
2026-02-19 03:10:47.256160 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-19 03:10:47.256167 | orchestrator | Thursday 19 February 2026 03:10:39 +0000 (0:00:00.564) 0:04:05.588 *****
2026-02-19 03:10:47.256174 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-19 03:10:47.256181 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-19 03:10:47.256188 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-19 03:10:47.256195 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-19 03:10:47.256212 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-19 03:10:47.256222 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-19 03:10:47.256232 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-19 03:10:47.256242 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-19 03:10:47.256252 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-19 03:10:47.256279 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-19 03:10:47.256292 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-19 03:10:47.256304 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-19 03:10:47.256315 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-19 03:10:47.256326 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-19 03:10:47.256337 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-19 03:10:47.256362 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-19 03:10:47.256370 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-19 03:10:47.256377 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-19 03:10:47.256385 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-19 03:10:47.256392 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-19 03:10:47.256399 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-19 03:10:47.256407 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-19 03:10:47.256414 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-19 03:10:47.256421 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-19 03:10:47.256428 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-19 03:10:47.256434 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-19 03:10:47.256440 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-19 03:10:47.256446 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-19 03:10:47.256452 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-19 03:10:47.256458 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-19 03:10:47.256464 | orchestrator |
2026-02-19 03:10:47.256471 | orchestrator | TASK [Manage annotations] ******************************************************
2026-02-19 03:10:47.256477 | orchestrator | Thursday 19 February 2026 03:10:46 +0000 (0:00:07.214) 0:04:12.802 *****
2026-02-19 03:10:47.256483 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:10:47.256489 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:10:47.256495 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:10:47.256501 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:10:47.256507 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:10:47.256513 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:10:47.256520 | orchestrator |
2026-02-19 03:10:47.256526 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-19 03:10:47.256532 | orchestrator | Thursday 19 February 2026 03:10:46 +0000 (0:00:00.452) 0:04:13.254 *****
2026-02-19 03:10:47.256538 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:10:47.256550 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:10:47.256556 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:10:47.256562 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:10:47.256568 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:10:47.256574 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:10:47.256580 | orchestrator |
2026-02-19 03:10:47.256586 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 03:10:47.256593 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-19 03:10:47.256602 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-19 03:10:47.256609 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-19 03:10:47.256615 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-19 03:10:47.256621 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-19 03:10:47.256628 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-19 03:10:47.256634 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-19 03:10:47.256640 | orchestrator |
2026-02-19 03:10:47.256722 | orchestrator |
2026-02-19 03:10:47.256729 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 03:10:47.256735 | orchestrator | Thursday 19 February 2026 03:10:47 +0000 (0:00:00.512) 0:04:13.767 *****
2026-02-19 03:10:47.256748 | orchestrator | ===============================================================================
2026-02-19 03:10:47.587577 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.02s
2026-02-19 03:10:47.587778 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 44.46s
2026-02-19 03:10:47.587798 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.54s
2026-02-19 03:10:47.587810 | orchestrator | kubectl : Install required packages ------------------------------------ 11.65s
2026-02-19 03:10:47.587822 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.09s
2026-02-19 03:10:47.587832 | orchestrator | Manage labels ----------------------------------------------------------- 7.21s
2026-02-19 03:10:47.587843 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.16s
2026-02-19 03:10:47.587854 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.47s
2026-02-19 03:10:47.587865 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.92s
2026-02-19 03:10:47.587875 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.56s
2026-02-19 03:10:47.587887 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.28s
2026-02-19 03:10:47.587899 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.78s
2026-02-19 03:10:47.587910 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.60s
2026-02-19 03:10:47.587921 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.78s
2026-02-19 03:10:47.587932 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.78s
2026-02-19 03:10:47.587943 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.76s
2026-02-19 03:10:47.587953 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.57s
2026-02-19 03:10:47.588027 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.54s
2026-02-19 03:10:47.588039 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.51s
2026-02-19 03:10:47.588049 | orchestrator | k3s_agent : Create custom resolv.conf for k3s --------------------------- 1.38s
2026-02-19 03:10:47.869820 | orchestrator | + osism apply copy-kubeconfig
2026-02-19 03:10:59.970561 | orchestrator | 2026-02-19 03:10:59 | INFO  | Task d8759006-979c-493e-ab3f-22167efd64b2 (copy-kubeconfig) was prepared for execution.
2026-02-19 03:10:59.970775 | orchestrator | 2026-02-19 03:10:59 | INFO  | It takes a moment until task d8759006-979c-493e-ab3f-22167efd64b2 (copy-kubeconfig) has been started and output is visible here.
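Both kubeconfig plays above include a "Change server address in the kubeconfig" step: k3s writes `https://127.0.0.1:6443` as the API server address by default, so the fetched file has to be pointed at an address reachable from the manager. The role's actual mechanism is not shown in the log; a minimal sketch of the rewrite, using a hypothetical target address for illustration:

```python
import re

def rewrite_server(kubeconfig_text, new_server):
    """Replace every `server:` URL in a kubeconfig document with a
    reachable address (k3s defaults to https://127.0.0.1:6443)."""
    return re.sub(
        r"(?m)^(\s*server:\s*).*$",
        lambda m: m.group(1) + new_server,
        kubeconfig_text,
    )

# Usage: point the default loopback address at the first master (assumed address).
sample = (
    "clusters:\n"
    "- cluster:\n"
    "    server: https://127.0.0.1:6443\n"
    "  name: default\n"
)
rewritten = rewrite_server(sample, "https://192.168.16.10:6443")
```

A YAML-aware edit would be more robust in general; a line-anchored substitution works here because `server:` appears exactly once per cluster entry in a kubeconfig.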
2026-02-19 03:11:06.691450 | orchestrator |
2026-02-19 03:11:06.691554 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-19 03:11:06.691568 | orchestrator |
2026-02-19 03:11:06.691578 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-19 03:11:06.691587 | orchestrator | Thursday 19 February 2026 03:11:04 +0000 (0:00:00.154) 0:00:00.154 *****
2026-02-19 03:11:06.691596 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-19 03:11:06.691606 | orchestrator |
2026-02-19 03:11:06.691622 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-19 03:11:06.691644 | orchestrator | Thursday 19 February 2026 03:11:04 +0000 (0:00:00.743) 0:00:00.897 *****
2026-02-19 03:11:06.691748 | orchestrator | changed: [testbed-manager]
2026-02-19 03:11:06.691767 | orchestrator |
2026-02-19 03:11:06.691782 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-19 03:11:06.691797 | orchestrator | Thursday 19 February 2026 03:11:05 +0000 (0:00:01.181) 0:00:02.078 *****
2026-02-19 03:11:06.691817 | orchestrator | changed: [testbed-manager]
2026-02-19 03:11:06.691832 | orchestrator |
2026-02-19 03:11:06.691853 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 03:11:06.691864 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-19 03:11:06.691874 | orchestrator |
2026-02-19 03:11:06.691883 | orchestrator |
2026-02-19 03:11:06.691891 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 03:11:06.691900 | orchestrator | Thursday 19 February 2026 03:11:06 +0000 (0:00:00.456) 0:00:02.535 *****
2026-02-19 03:11:06.691909 | orchestrator | ===============================================================================
2026-02-19 03:11:06.691918 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.18s
2026-02-19 03:11:06.691927 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.74s
2026-02-19 03:11:06.691935 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.46s
2026-02-19 03:11:06.970448 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-02-19 03:11:19.124067 | orchestrator | 2026-02-19 03:11:19 | INFO  | Task def24e55-8f96-4e8c-b198-1d2d837fc3bc (openstackclient) was prepared for execution.
2026-02-19 03:11:19.124146 | orchestrator | 2026-02-19 03:11:19 | INFO  | It takes a moment until task def24e55-8f96-4e8c-b198-1d2d837fc3bc (openstackclient) has been started and output is visible here.
2026-02-19 03:12:04.745206 | orchestrator |
2026-02-19 03:12:04.745357 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-19 03:12:04.745376 | orchestrator |
2026-02-19 03:12:04.745388 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-19 03:12:04.745399 | orchestrator | Thursday 19 February 2026 03:11:23 +0000 (0:00:00.218) 0:00:00.218 *****
2026-02-19 03:12:04.745412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-19 03:12:04.745424 | orchestrator |
2026-02-19 03:12:04.745462 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-19 03:12:04.745474 | orchestrator | Thursday 19 February 2026 03:11:23 +0000 (0:00:00.214) 0:00:00.432 *****
2026-02-19 03:12:04.745485 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-19 03:12:04.745496 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-19 03:12:04.745507 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-19 03:12:04.745519 | orchestrator |
2026-02-19 03:12:04.745530 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-19 03:12:04.745541 | orchestrator | Thursday 19 February 2026 03:11:24 +0000 (0:00:01.229) 0:00:01.662 *****
2026-02-19 03:12:04.745552 | orchestrator | changed: [testbed-manager]
2026-02-19 03:12:04.745563 | orchestrator |
2026-02-19 03:12:04.745574 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-19 03:12:04.745585 | orchestrator | Thursday 19 February 2026 03:11:26 +0000 (0:00:01.407) 0:00:03.070 *****
2026-02-19 03:12:04.745596 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-19 03:12:04.745607 | orchestrator | ok: [testbed-manager]
2026-02-19 03:12:04.745619 | orchestrator |
2026-02-19 03:12:04.745630 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-19 03:12:04.745641 | orchestrator | Thursday 19 February 2026 03:11:59 +0000 (0:00:33.543) 0:00:36.614 *****
2026-02-19 03:12:04.745651 | orchestrator | changed: [testbed-manager]
2026-02-19 03:12:04.745731 | orchestrator |
2026-02-19 03:12:04.745747 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-19 03:12:04.745760 | orchestrator | Thursday 19 February 2026 03:12:00 +0000 (0:00:00.940) 0:00:37.554 *****
2026-02-19 03:12:04.745772 | orchestrator | ok: [testbed-manager]
2026-02-19 03:12:04.745785 | orchestrator |
2026-02-19 03:12:04.745798 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-19 03:12:04.745810 | orchestrator | Thursday 19 February 2026 03:12:01 +0000 (0:00:00.615) 0:00:38.170 *****
2026-02-19 03:12:04.745823 | orchestrator | changed: [testbed-manager]
2026-02-19 03:12:04.745851 | orchestrator |
2026-02-19 03:12:04.745876 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-19 03:12:04.745889 | orchestrator | Thursday 19 February 2026 03:12:02 +0000 (0:00:01.340) 0:00:39.510 *****
2026-02-19 03:12:04.745900 | orchestrator | changed: [testbed-manager]
2026-02-19 03:12:04.745912 | orchestrator |
2026-02-19 03:12:04.745922 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-19 03:12:04.745933 | orchestrator | Thursday 19 February 2026 03:12:03 +0000 (0:00:00.722) 0:00:40.233 *****
2026-02-19 03:12:04.745944 | orchestrator | changed: [testbed-manager]
2026-02-19 03:12:04.745955 | orchestrator |
2026-02-19 03:12:04.745965 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-19 03:12:04.745976 | orchestrator | Thursday 19 February 2026 03:12:03 +0000 (0:00:00.588) 0:00:40.822 *****
2026-02-19 03:12:04.745987 | orchestrator | ok: [testbed-manager]
2026-02-19 03:12:04.745998 | orchestrator |
2026-02-19 03:12:04.746008 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 03:12:04.746081 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-19 03:12:04.746095 | orchestrator |
2026-02-19 03:12:04.746106 | orchestrator |
2026-02-19 03:12:04.746117 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 03:12:04.746128 | orchestrator | Thursday 19 February 2026 03:12:04 +0000 (0:00:00.412) 0:00:41.235 *****
2026-02-19 03:12:04.746139 | orchestrator | ===============================================================================
2026-02-19 03:12:04.746149 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.54s
2026-02-19 03:12:04.746160 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.41s
2026-02-19 03:12:04.746181 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.34s
2026-02-19 03:12:04.746192 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.23s
2026-02-19 03:12:04.746203 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.94s
2026-02-19 03:12:04.746213 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.72s
2026-02-19 03:12:04.746224 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.62s
2026-02-19 03:12:04.746235 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.59s
2026-02-19 03:12:04.746245 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.41s
2026-02-19 03:12:04.746256 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.21s
2026-02-19 03:12:07.010933 | orchestrator | 2026-02-19 03:12:07 | INFO  | Task 44a44246-0f8f-43bc-b039-d0d0987729a3 (common) was prepared for execution.
2026-02-19 03:12:07.011024 | orchestrator | 2026-02-19 03:12:07 | INFO  | It takes a moment until task 44a44246-0f8f-43bc-b039-d0d0987729a3 (common) has been started and output is visible here.
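The "Wait for an healthy service" handler in the openstackclient play presumably polls the container's healthcheck status until it reports healthy. How the role does this is not visible in the log; a sketch of the check against `docker inspect`-style output, with a simulated document standing in for a real container:

```python
import json

def is_healthy(inspect_output):
    """Return True when `docker inspect <container>` output reports a
    healthy container (State.Health.Status == "healthy")."""
    state = json.loads(inspect_output)[0]["State"]
    return state.get("Health", {}).get("Status") == "healthy"

# Simulated `docker inspect` output for a container with a passing healthcheck;
# a container without a healthcheck has no State.Health key at all.
sample = json.dumps([{"State": {"Status": "running", "Health": {"Status": "healthy"}}}])
healthy = is_healthy(sample)
```

In a real wait loop this check would be combined with the retry pattern Ansible applies to the "Manage openstackclient service" task above.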
2026-02-19 03:12:18.309136 | orchestrator |
2026-02-19 03:12:18.309256 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-19 03:12:18.309271 | orchestrator |
2026-02-19 03:12:18.309282 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-19 03:12:18.309293 | orchestrator | Thursday 19 February 2026 03:12:11 +0000 (0:00:00.240) 0:00:00.240 *****
2026-02-19 03:12:18.309304 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:12:18.309316 | orchestrator |
2026-02-19 03:12:18.309326 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-19 03:12:18.309336 | orchestrator | Thursday 19 February 2026 03:12:12 +0000 (0:00:01.099) 0:00:01.339 *****
2026-02-19 03:12:18.309346 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-19 03:12:18.309356 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-19 03:12:18.309367 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-19 03:12:18.309377 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-19 03:12:18.309386 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-19 03:12:18.309396 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-19 03:12:18.309406 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-19 03:12:18.309417 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-19 03:12:18.309426 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-19 03:12:18.309456 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-19 03:12:18.309467 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-19 03:12:18.309476 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-19 03:12:18.309486 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-19 03:12:18.309499 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-19 03:12:18.309509 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-19 03:12:18.309518 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-19 03:12:18.309528 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-19 03:12:18.309560 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-19 03:12:18.309571 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-19 03:12:18.309581 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-19 03:12:18.309591 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-19 03:12:18.309600 | orchestrator |
2026-02-19 03:12:18.309610 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-19 03:12:18.309620 | orchestrator | Thursday 19 February 2026 03:12:14 +0000 (0:00:02.352) 0:00:03.691 *****
2026-02-19 03:12:18.309630 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2,
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 03:12:18.309640 | orchestrator | 2026-02-19 03:12:18.309650 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-19 03:12:18.309664 | orchestrator | Thursday 19 February 2026 03:12:15 +0000 (0:00:01.144) 0:00:04.836 ***** 2026-02-19 03:12:18.309709 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:18.309722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:18.309759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:18.309772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:18.309782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:18.309793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:18.309811 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:18.309821 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:18.309832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:18.309857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:19.618366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:19.618432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:19.618452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:19.618456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:19.618461 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:19.618485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 
03:12:19.618490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:19.618507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:19.618511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:19.618515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:19.618533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:19.618538 | orchestrator | 2026-02-19 03:12:19.618543 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-19 03:12:19.618548 | orchestrator | Thursday 19 February 2026 03:12:19 +0000 (0:00:03.704) 0:00:08.540 ***** 2026-02-19 03:12:19.618553 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:19.618558 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:19.618562 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:19.618566 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:12:19.618571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:19.618581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:20.191037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:20.191138 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:12:20.191186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:20.191196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:20.191204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:20.191210 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:12:20.191217 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:20.191227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:20.191233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:20.191240 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:12:20.191269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:20.191288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:20.191300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:20.191310 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:12:20.191322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:20.191333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:20.191344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:20.191354 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:12:20.191364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:20.191376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:21.038957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:21.040172 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:12:21.040258 | orchestrator | 2026-02-19 03:12:21.040282 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-19 03:12:21.040296 | orchestrator | Thursday 19 February 2026 03:12:20 +0000 (0:00:00.863) 0:00:09.404 ***** 2026-02-19 03:12:21.040310 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:21.040324 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:21.040336 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:21.040347 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:12:21.040378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:21.040394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:21.040431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:21.040448 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:12:21.040507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:21.040527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:21.040544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:21.040561 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:12:21.040578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:21.040596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-19 03:12:21.040660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:21.040715 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:12:21.040726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:21.040759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:25.821758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:25.821870 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:12:25.821888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:25.821901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:25.821912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:25.821933 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:12:25.821951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 03:12:25.821983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:25.821992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:25.822000 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:12:25.822009 | orchestrator | 2026-02-19 
03:12:25.822069 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-19 03:12:25.822080 | orchestrator | Thursday 19 February 2026 03:12:21 +0000 (0:00:01.719) 0:00:11.124 ***** 2026-02-19 03:12:25.822090 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:12:25.822101 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:12:25.822111 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:12:25.822120 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:12:25.822171 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:12:25.822180 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:12:25.822189 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:12:25.822198 | orchestrator | 2026-02-19 03:12:25.822208 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-19 03:12:25.822218 | orchestrator | Thursday 19 February 2026 03:12:22 +0000 (0:00:00.662) 0:00:11.787 ***** 2026-02-19 03:12:25.822227 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:12:25.822236 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:12:25.822250 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:12:25.822260 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:12:25.822270 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:12:25.822279 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:12:25.822289 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:12:25.822298 | orchestrator | 2026-02-19 03:12:25.822307 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-19 03:12:25.822317 | orchestrator | Thursday 19 February 2026 03:12:23 +0000 (0:00:00.796) 0:00:12.583 ***** 2026-02-19 03:12:25.822327 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:25.822354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:25.822375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:25.822388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:25.822394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:25.822400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:25.822418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:28.827189 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827408 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827532 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:12:28.827543 | orchestrator | 2026-02-19 03:12:28.827556 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-19 03:12:28.827568 | orchestrator | Thursday 19 February 2026 03:12:26 +0000 (0:00:03.512) 
0:00:16.095 ***** 2026-02-19 03:12:28.827578 | orchestrator | [WARNING]: Skipped 2026-02-19 03:12:28.827596 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-19 03:12:28.827615 | orchestrator | to this access issue: 2026-02-19 03:12:28.827638 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-19 03:12:28.827663 | orchestrator | directory 2026-02-19 03:12:28.827714 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-19 03:12:28.827734 | orchestrator | 2026-02-19 03:12:28.827751 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-19 03:12:28.827769 | orchestrator | Thursday 19 February 2026 03:12:27 +0000 (0:00:00.945) 0:00:17.041 ***** 2026-02-19 03:12:28.827786 | orchestrator | [WARNING]: Skipped 2026-02-19 03:12:28.827817 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-19 03:12:38.348797 | orchestrator | to this access issue: 2026-02-19 03:12:38.348911 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-19 03:12:38.348926 | orchestrator | directory 2026-02-19 03:12:38.348937 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-19 03:12:38.348949 | orchestrator | 2026-02-19 03:12:38.348959 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-19 03:12:38.348970 | orchestrator | Thursday 19 February 2026 03:12:29 +0000 (0:00:01.294) 0:00:18.336 ***** 2026-02-19 03:12:38.349001 | orchestrator | [WARNING]: Skipped 2026-02-19 03:12:38.349012 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-19 03:12:38.349021 | orchestrator | to this access issue: 2026-02-19 03:12:38.349031 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-19 
03:12:38.349040 | orchestrator | directory 2026-02-19 03:12:38.349050 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-19 03:12:38.349060 | orchestrator | 2026-02-19 03:12:38.349069 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-19 03:12:38.349079 | orchestrator | Thursday 19 February 2026 03:12:29 +0000 (0:00:00.825) 0:00:19.162 ***** 2026-02-19 03:12:38.349088 | orchestrator | [WARNING]: Skipped 2026-02-19 03:12:38.349098 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-19 03:12:38.349108 | orchestrator | to this access issue: 2026-02-19 03:12:38.349117 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-19 03:12:38.349126 | orchestrator | directory 2026-02-19 03:12:38.349136 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-19 03:12:38.349145 | orchestrator | 2026-02-19 03:12:38.349155 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-19 03:12:38.349165 | orchestrator | Thursday 19 February 2026 03:12:30 +0000 (0:00:00.810) 0:00:19.973 ***** 2026-02-19 03:12:38.349174 | orchestrator | changed: [testbed-manager] 2026-02-19 03:12:38.349184 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:12:38.349193 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:12:38.349203 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:12:38.349212 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:12:38.349222 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:12:38.349248 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:12:38.349258 | orchestrator | 2026-02-19 03:12:38.349268 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-19 03:12:38.349278 | orchestrator | Thursday 19 February 2026 03:12:33 +0000 (0:00:02.498) 0:00:22.471 ***** 2026-02-19 
03:12:38.349290 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-19 03:12:38.349302 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-19 03:12:38.349313 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-19 03:12:38.349324 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-19 03:12:38.349334 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-19 03:12:38.349346 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-19 03:12:38.349361 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-19 03:12:38.349372 | orchestrator | 2026-02-19 03:12:38.349384 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-19 03:12:38.349395 | orchestrator | Thursday 19 February 2026 03:12:35 +0000 (0:00:01.995) 0:00:24.467 ***** 2026-02-19 03:12:38.349405 | orchestrator | changed: [testbed-manager] 2026-02-19 03:12:38.349416 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:12:38.349427 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:12:38.349439 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:12:38.349449 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:12:38.349458 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:12:38.349468 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:12:38.349477 | orchestrator | 2026-02-19 03:12:38.349486 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-19 03:12:38.349503 | orchestrator | Thursday 19 February 
2026 03:12:37 +0000 (0:00:01.932) 0:00:26.400 ***** 2026-02-19 03:12:38.349516 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:38.349544 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:12:38.349555 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 03:12:38.349566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:38.349576 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 03:12:38.349590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:38.349665 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:38.349708 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 03:12:38.349719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:38.349737 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 03:12:44.379932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:44.380021 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:44.380038 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:44.380099 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 03:12:44.380113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:44.380138 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 03:12:44.380147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:44.380184 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:44.380195 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:44.380204 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:44.380213 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:44.380222 | orchestrator |
2026-02-19 03:12:44.380232 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-19 03:12:44.380242 | orchestrator | Thursday 19 February 2026 03:12:38 +0000 (0:00:01.605) 0:00:28.005 *****
2026-02-19 03:12:44.380251 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 03:12:44.380273 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 03:12:44.380291 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 03:12:44.380307 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 03:12:44.380315 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 03:12:44.380324 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 03:12:44.380332 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 03:12:44.380341 | orchestrator |
2026-02-19 03:12:44.380349 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-19 03:12:44.380357 | orchestrator | Thursday 19 February 2026 03:12:40 +0000 (0:00:01.967) 0:00:29.972 *****
2026-02-19 03:12:44.380365 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 03:12:44.380375 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 03:12:44.380384 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 03:12:44.380400 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 03:12:44.380409 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 03:12:44.380418 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 03:12:44.380427 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 03:12:44.380436 | orchestrator |
2026-02-19 03:12:44.380445 | orchestrator | TASK [common : Check common containers] ****************************************
2026-02-19 03:12:44.380455 | orchestrator | Thursday 19 February 2026 03:12:42 +0000 (0:00:01.689) 0:00:31.662 *****
2026-02-19 03:12:44.380461 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 03:12:44.380475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 03:12:45.017252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 03:12:45.017352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 03:12:45.017390 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 03:12:45.017418 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:45.017431 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 03:12:45.017442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 03:12:45.017453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:45.017484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:45.017497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:45.017521 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:45.017534 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:45.017549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:45.017562 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:45.017574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:12:45.017594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:14:00.377134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:14:00.377252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:14:00.377263 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:14:00.377282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:14:00.377289 | orchestrator |
2026-02-19 03:14:00.377297 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-19 03:14:00.377305 | orchestrator | Thursday 19 February 2026 03:12:45 +0000 (0:00:02.568) 0:00:34.230 *****
2026-02-19 03:14:00.377312 | orchestrator | changed: [testbed-manager]
2026-02-19 03:14:00.377319 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:14:00.377326 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:14:00.377332 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:14:00.377338 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:14:00.377345 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:14:00.377351 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:14:00.377357 | orchestrator |
2026-02-19 03:14:00.377363 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-19 03:14:00.377370 | orchestrator | Thursday 19 February 2026 03:12:46 +0000 (0:00:01.220) 0:00:35.451 *****
2026-02-19 03:14:00.377376 | orchestrator | changed: [testbed-manager]
2026-02-19 03:14:00.377382 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:14:00.377388 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:14:00.377394 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:14:00.377401 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:14:00.377407 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:14:00.377413 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:14:00.377419 | orchestrator |
2026-02-19 03:14:00.377425 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 03:14:00.377431 | orchestrator | Thursday 19 February 2026 03:12:47 +0000 (0:00:00.984) 0:00:36.435 *****
2026-02-19 03:14:00.377437 | orchestrator |
2026-02-19 03:14:00.377444 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 03:14:00.377450 | orchestrator | Thursday 19 February 2026 03:12:47 +0000 (0:00:00.058) 0:00:36.493 *****
2026-02-19 03:14:00.377456 | orchestrator |
2026-02-19 03:14:00.377462 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 03:14:00.377468 | orchestrator | Thursday 19 February 2026 03:12:47 +0000 (0:00:00.058) 0:00:36.551 *****
2026-02-19 03:14:00.377474 | orchestrator |
2026-02-19 03:14:00.377480 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 03:14:00.377487 | orchestrator | Thursday 19 February 2026 03:12:47 +0000 (0:00:00.058) 0:00:36.609 *****
2026-02-19 03:14:00.377493 | orchestrator |
2026-02-19 03:14:00.377499 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 03:14:00.377511 | orchestrator | Thursday 19 February 2026 03:12:47 +0000 (0:00:00.163) 0:00:36.773 *****
2026-02-19 03:14:00.377517 | orchestrator |
2026-02-19 03:14:00.377523 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 03:14:00.377529 | orchestrator | Thursday 19 February 2026 03:12:47 +0000 (0:00:00.070) 0:00:36.843 *****
2026-02-19 03:14:00.377535 | orchestrator |
2026-02-19 03:14:00.377542 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 03:14:00.377548 | orchestrator | Thursday 19 February 2026 03:12:47 +0000 (0:00:00.056) 0:00:36.900 *****
2026-02-19 03:14:00.377554 | orchestrator |
2026-02-19 03:14:00.377560 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-19 03:14:00.377566 | orchestrator | Thursday 19 February 2026 03:12:47 +0000 (0:00:00.082) 0:00:36.982 *****
2026-02-19 03:14:00.377572 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:14:00.377579 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:14:00.377585 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:14:00.377591 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:14:00.377597 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:14:00.377615 | orchestrator | changed: [testbed-manager]
2026-02-19 03:14:00.377622 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:14:00.377628 | orchestrator |
2026-02-19 03:14:00.377635 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-19 03:14:00.377641 | orchestrator | Thursday 19 February 2026 03:13:19 +0000 (0:00:31.936) 0:01:08.918 *****
2026-02-19 03:14:00.377647 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:14:00.377653 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:14:00.377659 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:14:00.377665 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:14:00.377671 | orchestrator | changed: [testbed-manager]
2026-02-19 03:14:00.377677 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:14:00.377683 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:14:00.377690 | orchestrator |
2026-02-19 03:14:00.377696 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-19 03:14:00.377728 | orchestrator | Thursday 19 February 2026 03:13:49 +0000 (0:00:30.176) 0:01:39.095 *****
2026-02-19 03:14:00.377739 | orchestrator | ok: [testbed-manager]
2026-02-19 03:14:00.377751 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:14:00.377763 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:14:00.377773 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:14:00.377783 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:14:00.377789 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:14:00.377795 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:14:00.377801 | orchestrator |
2026-02-19 03:14:00.377808 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-19 03:14:00.377814 | orchestrator | Thursday 19 February 2026 03:13:51 +0000 (0:00:01.871) 0:01:40.966 *****
2026-02-19 03:14:00.377820 | orchestrator | changed: [testbed-manager]
2026-02-19 03:14:00.377827 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:14:00.377833 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:14:00.377843 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:14:00.377854 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:14:00.377869 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:14:00.377881 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:14:00.377890 | orchestrator |
2026-02-19 03:14:00.377900 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 03:14:00.377910 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-19 03:14:00.377922 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-19 03:14:00.377940 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-19 03:14:00.377957 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-19 03:14:00.377967 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-19 03:14:00.377976 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-19 03:14:00.377986 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-19 03:14:00.377996 | orchestrator |
2026-02-19 03:14:00.378006 | orchestrator |
2026-02-19 03:14:00.378082 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 03:14:00.378096 | orchestrator | Thursday 19 February 2026 03:14:00 +0000 (0:00:08.601) 0:01:49.568 *****
2026-02-19 03:14:00.378107 | orchestrator | ===============================================================================
2026-02-19 03:14:00.378118 | orchestrator | common : Restart fluentd container ------------------------------------- 31.94s
2026-02-19 03:14:00.378129 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 30.18s
2026-02-19 03:14:00.378140 | orchestrator | common : Restart cron container ----------------------------------------- 8.60s
2026-02-19 03:14:00.378147 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.70s
2026-02-19 03:14:00.378153 | orchestrator | common : Copying over config.json files for services -------------------- 3.51s
2026-02-19 03:14:00.378159 | orchestrator | common : Check common containers ---------------------------------------- 2.57s
2026-02-19 03:14:00.378165 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.50s
2026-02-19 03:14:00.378171 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.35s
2026-02-19 03:14:00.378178 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.00s
2026-02-19 03:14:00.378184 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.97s
2026-02-19 03:14:00.378190 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.93s
2026-02-19 03:14:00.378196 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.87s
2026-02-19 03:14:00.378202 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.72s
2026-02-19 03:14:00.378208 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.69s
2026-02-19 03:14:00.378214 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.61s
2026-02-19 03:14:00.378221 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.29s
2026-02-19 03:14:00.378236 | orchestrator | common : Creating log volume -------------------------------------------- 1.22s
2026-02-19 03:14:00.787836 | orchestrator | common : include_tasks -------------------------------------------------- 1.14s
2026-02-19 03:14:00.787949 | orchestrator | common : include_tasks -------------------------------------------------- 1.10s
2026-02-19 03:14:00.787965 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 0.98s
2026-02-19 03:14:03.052547 | orchestrator | 2026-02-19 03:14:03 | INFO  | Task 08e79bb5-33fa-4963-8eda-6174f99febab (loadbalancer) was prepared for execution.
2026-02-19 03:14:03.052620 | orchestrator | 2026-02-19 03:14:03 | INFO  | It takes a moment until task 08e79bb5-33fa-4963-8eda-6174f99febab (loadbalancer) has been started and output is visible here.
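The PLAY RECAP above is the usual place for tooling around a job like this to decide whether a run succeeded: every host must report `unreachable=0` and `failed=0`. As an illustration only (this helper is not part of the job or of kolla-ansible), a minimal Python parser for recap lines might look like:

```python
import re

# Hypothetical helper: parse Ansible "PLAY RECAP" lines of the form
#   testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 ...
# into per-host counters so a wrapper can fail fast on errors.
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap(lines):
    """Return {host: {stat_name: int}} for every recap line found."""
    results = {}
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if not m:
            continue  # skip non-recap lines (task output, headers, ...)
        stats = dict(
            (key, int(val))
            for key, val in (pair.split("=") for pair in m.group("stats").split())
        )
        results[m.group("host")] = stats
    return results

recap = parse_recap([
    "testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0",
    "testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0",
])
assert recap["testbed-manager"]["ok"] == 22
# A healthy run has no failed or unreachable hosts anywhere in the recap.
assert all(s["failed"] == 0 and s["unreachable"] == 0 for s in recap.values())
```

The regex deliberately requires the `key=value` pairs to run to the end of the line, so timestamped console prefixes and task output are ignored.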
2026-02-19 03:14:18.839978 | orchestrator | 2026-02-19 03:14:18.840097 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 03:14:18.840115 | orchestrator | 2026-02-19 03:14:18.840127 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 03:14:18.840193 | orchestrator | Thursday 19 February 2026 03:14:07 +0000 (0:00:00.237) 0:00:00.237 ***** 2026-02-19 03:14:18.840235 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:14:18.840249 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:14:18.840261 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:14:18.840272 | orchestrator | 2026-02-19 03:14:18.840283 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 03:14:18.840294 | orchestrator | Thursday 19 February 2026 03:14:07 +0000 (0:00:00.290) 0:00:00.527 ***** 2026-02-19 03:14:18.840306 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-19 03:14:18.840317 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-19 03:14:18.840327 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-19 03:14:18.840338 | orchestrator | 2026-02-19 03:14:18.840349 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-19 03:14:18.840359 | orchestrator | 2026-02-19 03:14:18.840370 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-19 03:14:18.840395 | orchestrator | Thursday 19 February 2026 03:14:07 +0000 (0:00:00.408) 0:00:00.935 ***** 2026-02-19 03:14:18.840408 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:14:18.840419 | orchestrator | 2026-02-19 03:14:18.840430 | orchestrator | TASK [loadbalancer : Check IPv6 support] 
***************************************
2026-02-19 03:14:18.840441 | orchestrator | Thursday 19 February 2026 03:14:08 +0000 (0:00:00.548) 0:00:01.484 *****
2026-02-19 03:14:18.840451 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:14:18.840462 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:14:18.840473 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:14:18.840484 | orchestrator |
2026-02-19 03:14:18.840495 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-19 03:14:18.840507 | orchestrator | Thursday 19 February 2026 03:14:09 +0000 (0:00:01.600) 0:00:03.085 *****
2026-02-19 03:14:18.840521 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:14:18.840534 | orchestrator |
2026-02-19 03:14:18.840546 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-19 03:14:18.840560 | orchestrator | Thursday 19 February 2026 03:14:10 +0000 (0:00:00.670) 0:00:03.756 *****
2026-02-19 03:14:18.840579 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:14:18.840607 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:14:18.840630 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:14:18.840650 | orchestrator |
2026-02-19 03:14:18.840669 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-19 03:14:18.840689 | orchestrator | Thursday 19 February 2026 03:14:11 +0000 (0:00:00.602) 0:00:04.358 *****
2026-02-19 03:14:18.840734 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-19 03:14:18.840756 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-19 03:14:18.840775 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-19 03:14:18.840792 | orchestrator | changed: [testbed-node-1] => (item={'name':
'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-19 03:14:18.840808 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-19 03:14:18.840826 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-19 03:14:18.840843 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-19 03:14:18.840860 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-19 03:14:18.840878 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-19 03:14:18.840897 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-19 03:14:18.840931 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-19 03:14:18.840951 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-19 03:14:18.840969 | orchestrator |
2026-02-19 03:14:18.840987 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-19 03:14:18.841005 | orchestrator | Thursday 19 February 2026 03:14:14 +0000 (0:00:03.269) 0:00:07.627 *****
2026-02-19 03:14:18.841017 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-19 03:14:18.841028 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-19 03:14:18.841039 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-19 03:14:18.841050 | orchestrator |
2026-02-19 03:14:18.841061 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-19 03:14:18.841072 | orchestrator | Thursday 19 February 2026 03:14:15 +0000 (0:00:00.692) 0:00:08.320 *****
2026-02-19 03:14:18.841083 | orchestrator | changed: [testbed-node-2] =>
(item=ip_vs)
2026-02-19 03:14:18.841094 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-19 03:14:18.841104 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-19 03:14:18.841115 | orchestrator |
2026-02-19 03:14:18.841126 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-19 03:14:18.841136 | orchestrator | Thursday 19 February 2026 03:14:16 +0000 (0:00:01.292) 0:00:09.612 *****
2026-02-19 03:14:18.841147 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-19 03:14:18.841158 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:14:18.841191 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-19 03:14:18.841202 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:14:18.841213 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-19 03:14:18.841224 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:14:18.841234 | orchestrator |
2026-02-19 03:14:18.841245 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-19 03:14:18.841256 | orchestrator | Thursday 19 February 2026 03:14:16 +0000 (0:00:00.489) 0:00:10.102 *****
2026-02-19 03:14:18.841278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-19 03:14:18.841296 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-19 03:14:18.841308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-19 03:14:18.841328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19
03:14:18.841339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-19 03:14:18.841360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-19 03:14:24.176266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-19 03:14:24.176353 | orchestrator | changed: [testbed-node-2] => (item={'key':
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-19 03:14:24.176367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-19 03:14:24.176379 | orchestrator |
2026-02-19 03:14:24.176393 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-02-19 03:14:24.176408 | orchestrator | Thursday 19 February 2026 03:14:18 +0000 (0:00:01.894) 0:00:11.997 *****
2026-02-19 03:14:24.176418 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:14:24.176449 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:14:24.176460 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:14:24.176470 | orchestrator |
2026-02-19 03:14:24.176479 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-02-19 03:14:24.176489 | orchestrator | Thursday 19 February 2026 03:14:19 +0000 (0:00:00.963) 0:00:12.961 *****
2026-02-19 03:14:24.176498 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-02-19 03:14:24.176508 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-19
03:14:24.176517 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-02-19 03:14:24.176527 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-02-19 03:14:24.176537 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-02-19 03:14:24.176546 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-02-19 03:14:24.176555 | orchestrator |
2026-02-19 03:14:24.176565 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-02-19 03:14:24.176574 | orchestrator | Thursday 19 February 2026 03:14:21 +0000 (0:00:01.485) 0:00:14.446 *****
2026-02-19 03:14:24.176585 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:14:24.176596 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:14:24.176606 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:14:24.176612 | orchestrator |
2026-02-19 03:14:24.176618 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-02-19 03:14:24.176623 | orchestrator | Thursday 19 February 2026 03:14:22 +0000 (0:00:00.920) 0:00:15.366 *****
2026-02-19 03:14:24.176629 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:14:24.176635 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:14:24.176641 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:14:24.176647 | orchestrator |
2026-02-19 03:14:24.176653 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-02-19 03:14:24.176658 | orchestrator | Thursday 19 February 2026 03:14:23 +0000 (0:00:01.324) 0:00:16.691 *****
2026-02-19 03:14:24.176665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-19 03:14:24.176688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-19 03:14:24.176695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-19 03:14:24.176702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/',
'__omit_place_holder__431531ba0609765fa72ade3ee4766ba5721f8e89', '__omit_place_holder__431531ba0609765fa72ade3ee4766ba5721f8e89'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-19 03:14:24.176784 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:14:24.176793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-19 03:14:24.176837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-19 03:14:24.176850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130',
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-19 03:14:24.176860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__431531ba0609765fa72ade3ee4766ba5721f8e89', '__omit_place_holder__431531ba0609765fa72ade3ee4766ba5721f8e89'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-19 03:14:24.176871 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:14:24.176891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-19 03:14:26.952235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-19 03:14:26.952371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-19 03:14:26.952388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__431531ba0609765fa72ade3ee4766ba5721f8e89', '__omit_place_holder__431531ba0609765fa72ade3ee4766ba5721f8e89'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-19 03:14:26.952401 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:14:26.952415 | orchestrator |
2026-02-19 03:14:26.952428 | orchestrator | TASK [loadbalancer : Copying checks for services
which are enabled] ************
2026-02-19 03:14:26.952440 | orchestrator | Thursday 19 February 2026 03:14:24 +0000 (0:00:00.645) 0:00:17.337 *****
2026-02-19 03:14:26.952451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-19 03:14:26.952463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-19 03:14:26.952474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/',
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-19 03:14:26.952528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-19 03:14:26.952542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-19 03:14:26.952553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__431531ba0609765fa72ade3ee4766ba5721f8e89',
'__omit_place_holder__431531ba0609765fa72ade3ee4766ba5721f8e89'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-19 03:14:26.952565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-19 03:14:26.952576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-19 03:14:26.952587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__431531ba0609765fa72ade3ee4766ba5721f8e89',
'__omit_place_holder__431531ba0609765fa72ade3ee4766ba5721f8e89'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-19 03:14:26.952628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-19 03:14:35.343311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-19 03:14:35.343447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__431531ba0609765fa72ade3ee4766ba5721f8e89',
'__omit_place_holder__431531ba0609765fa72ade3ee4766ba5721f8e89'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-19 03:14:35.343467 | orchestrator |
2026-02-19 03:14:35.343478 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-02-19 03:14:35.343493 | orchestrator | Thursday 19 February 2026 03:14:26 +0000 (0:00:02.776) 0:00:20.114 *****
2026-02-19 03:14:35.343509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-19 03:14:35.343528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-19 03:14:35.343545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value':
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-19 03:14:35.343593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-19 03:14:35.343649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-19 03:14:35.343669 | orchestrator | changed:
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 03:14:35.343685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 03:14:35.343702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 03:14:35.343812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 03:14:35.343847 | orchestrator | 2026-02-19 03:14:35.343859 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-19 03:14:35.343869 | orchestrator | Thursday 19 February 2026 03:14:30 +0000 (0:00:03.279) 0:00:23.393 ***** 2026-02-19 03:14:35.343894 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-19 03:14:35.343912 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-19 03:14:35.343927 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-19 03:14:35.343944 | orchestrator | 2026-02-19 03:14:35.343955 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-19 03:14:35.343966 | orchestrator | Thursday 19 February 2026 03:14:32 +0000 (0:00:01.853) 0:00:25.247 ***** 2026-02-19 03:14:35.343980 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-19 03:14:35.343996 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-19 03:14:35.344010 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-19 03:14:35.344020 | orchestrator | 2026-02-19 03:14:35.344030 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-19 03:14:35.344041 | orchestrator | Thursday 19 February 2026 03:14:34 +0000 
(0:00:02.707) 0:00:27.954 ***** 2026-02-19 03:14:35.344052 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:14:35.344068 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:14:35.344082 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:14:35.344093 | orchestrator | 2026-02-19 03:14:35.344120 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-19 03:14:46.686065 | orchestrator | Thursday 19 February 2026 03:14:35 +0000 (0:00:00.554) 0:00:28.509 ***** 2026-02-19 03:14:46.686149 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-19 03:14:46.686169 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-19 03:14:46.686175 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-19 03:14:46.686181 | orchestrator | 2026-02-19 03:14:46.686188 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-19 03:14:46.686194 | orchestrator | Thursday 19 February 2026 03:14:37 +0000 (0:00:02.012) 0:00:30.521 ***** 2026-02-19 03:14:46.686201 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-19 03:14:46.686207 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-19 03:14:46.686212 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-19 03:14:46.686218 | orchestrator | 2026-02-19 03:14:46.686224 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-19 03:14:46.686229 | orchestrator | Thursday 19 February 2026 
03:14:39 +0000 (0:00:02.083) 0:00:32.605 ***** 2026-02-19 03:14:46.686236 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-19 03:14:46.686242 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-19 03:14:46.686248 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-19 03:14:46.686253 | orchestrator | 2026-02-19 03:14:46.686268 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-19 03:14:46.686274 | orchestrator | Thursday 19 February 2026 03:14:40 +0000 (0:00:01.359) 0:00:33.964 ***** 2026-02-19 03:14:46.686280 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-19 03:14:46.686286 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-19 03:14:46.686291 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-19 03:14:46.686297 | orchestrator | 2026-02-19 03:14:46.686316 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-19 03:14:46.686322 | orchestrator | Thursday 19 February 2026 03:14:42 +0000 (0:00:01.419) 0:00:35.384 ***** 2026-02-19 03:14:46.686328 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:14:46.686333 | orchestrator | 2026-02-19 03:14:46.686339 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-19 03:14:46.686344 | orchestrator | Thursday 19 February 2026 03:14:42 +0000 (0:00:00.519) 0:00:35.903 ***** 2026-02-19 03:14:46.686351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-19 03:14:46.686359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-19 03:14:46.686369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-19 03:14:46.686390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 03:14:46.686397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 03:14:46.686403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 03:14:46.686414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 03:14:46.686421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 03:14:46.686426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 03:14:46.686432 | orchestrator | 2026-02-19 03:14:46.686438 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-19 03:14:46.686443 | orchestrator | Thursday 19 February 2026 03:14:46 +0000 (0:00:03.385) 0:00:39.289 ***** 2026-02-19 03:14:46.686457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-19 03:14:47.459021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:47.459133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:47.459174 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:14:47.459191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-19 03:14:47.459203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:47.459215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:47.459226 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:14:47.459237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-19 03:14:47.459283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:47.459296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:47.459315 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:14:47.459326 | orchestrator | 2026-02-19 03:14:47.459338 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-19 
03:14:47.459350 | orchestrator | Thursday 19 February 2026 03:14:46 +0000 (0:00:00.560) 0:00:39.850 ***** 2026-02-19 03:14:47.459362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-19 03:14:47.459373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:47.459385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:47.459398 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:14:47.459419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-19 03:14:47.459455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:48.278541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:48.278663 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:14:48.278679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-19 03:14:48.278693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:48.278703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:48.278713 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:14:48.278870 | orchestrator | 2026-02-19 03:14:48.278894 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-19 03:14:48.278913 | orchestrator | Thursday 19 February 2026 03:14:47 +0000 (0:00:00.773) 0:00:40.623 ***** 2026-02-19 03:14:48.278928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-19 03:14:48.278946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:48.278987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:48.279020 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:14:48.279038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-19 03:14:48.279056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:48.279074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:48.279088 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:14:48.279100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-19 03:14:48.279128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:48.279145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:48.279173 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:14:49.618688 | orchestrator | 2026-02-19 03:14:49.618812 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-19 03:14:49.618827 | orchestrator | Thursday 19 February 2026 03:14:48 +0000 (0:00:00.809) 0:00:41.432 ***** 2026-02-19 03:14:49.618842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-19 03:14:49.618856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:49.618867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:49.618878 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:14:49.618889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-19 03:14:49.618900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:49.618936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:49.618967 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:14:49.618996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-19 03:14:49.619007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:49.619017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:49.619027 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:14:49.619037 | orchestrator | 2026-02-19 03:14:49.619047 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-19 03:14:49.619057 | orchestrator | Thursday 19 February 2026 03:14:48 +0000 (0:00:00.576) 0:00:42.009 ***** 2026-02-19 03:14:49.619067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-19 03:14:49.619077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:49.619106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:49.619116 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:14:49.619134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-19 03:14:50.611396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:50.611486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:50.611496 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:14:50.611504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-19 03:14:50.611510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:50.611516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:50.611539 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:14:50.611545 | orchestrator | 2026-02-19 03:14:50.611552 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-19 03:14:50.611558 | orchestrator | Thursday 19 February 2026 03:14:49 +0000 (0:00:00.774) 0:00:42.784 ***** 2026-02-19 03:14:50.611576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-19 03:14:50.611603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:50.611615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:50.611624 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:14:50.611632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-19 03:14:50.611641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:50.611656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:50.611664 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:14:50.611678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-19 03:14:50.611693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:51.967914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:51.968025 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:14:51.968040 | orchestrator | 2026-02-19 03:14:51.968051 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-19 03:14:51.968061 | orchestrator | Thursday 19 February 2026 03:14:50 +0000 (0:00:00.988) 0:00:43.772 ***** 2026-02-19 03:14:51.968071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-19 03:14:51.968081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:51.968113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:51.968123 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:14:51.968132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-19 03:14:51.968155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:51.968181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:51.968191 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:14:51.968200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-19 03:14:51.968209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:51.968225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:51.968234 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:14:51.968242 | orchestrator | 2026-02-19 03:14:51.968251 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-19 03:14:51.968260 | orchestrator | Thursday 19 February 2026 03:14:51 +0000 (0:00:00.604) 0:00:44.377 ***** 2026-02-19 03:14:51.968269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-19 03:14:51.968278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:51.968299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:58.483924 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:14:58.484058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-19 03:14:58.484090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:58.484132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:58.484143 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:14:58.484154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-19 03:14:58.484180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 03:14:58.484191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 03:14:58.484201 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:14:58.484211 | orchestrator | 2026-02-19 03:14:58.484222 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-19 03:14:58.484233 | orchestrator | Thursday 19 February 2026 03:14:51 +0000 (0:00:00.752) 0:00:45.129 ***** 2026-02-19 03:14:58.484242 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-19 03:14:58.484271 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-19 03:14:58.484281 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-19 03:14:58.484291 | orchestrator | 2026-02-19 03:14:58.484301 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-19 03:14:58.484311 | orchestrator | Thursday 19 February 2026 03:14:53 +0000 (0:00:01.695) 0:00:46.824 ***** 2026-02-19 03:14:58.484321 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-19 03:14:58.484331 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-19 03:14:58.484341 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-19 03:14:58.484350 | orchestrator | 2026-02-19 03:14:58.484367 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-19 03:14:58.484384 | orchestrator | Thursday 19 February 2026 03:14:55 +0000 (0:00:01.637) 0:00:48.462 ***** 2026-02-19 03:14:58.484400 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-19 03:14:58.484418 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-19 03:14:58.484434 | orchestrator | skipping: [testbed-node-0] => 
(item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-19 03:14:58.484449 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:14:58.484464 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-19 03:14:58.484478 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-19 03:14:58.484494 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:14:58.484510 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-19 03:14:58.484526 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:14:58.484543 | orchestrator | 2026-02-19 03:14:58.484561 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-19 03:14:58.484578 | orchestrator | Thursday 19 February 2026 03:14:56 +0000 (0:00:00.790) 0:00:49.252 ***** 2026-02-19 03:14:58.484595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-19 03:14:58.484613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-19 03:14:58.484639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-19 03:14:58.484673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 03:15:02.517378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 03:15:02.517488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 03:15:02.517507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 03:15:02.517521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 03:15:02.517533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 03:15:02.517545 | orchestrator | 2026-02-19 03:15:02.517577 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-19 03:15:02.517591 | orchestrator | Thursday 19 February 2026 03:14:58 +0000 (0:00:02.396) 0:00:51.649 ***** 2026-02-19 03:15:02.517602 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:15:02.517614 | orchestrator | 2026-02-19 03:15:02.517625 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-19 03:15:02.517636 | orchestrator | Thursday 19 February 2026 03:14:59 +0000 (0:00:00.837) 0:00:52.487 ***** 2026-02-19 03:15:02.517667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 03:15:02.517702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 03:15:02.517715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 03:15:02.517815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 03:15:02.517829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 03:15:02.517848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 03:15:02.517868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 03:15:03.135220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 03:15:03.135313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 03:15:03.135326 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 03:15:03.135337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 03:15:03.135362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 03:15:03.135372 | orchestrator | 2026-02-19 03:15:03.135383 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using 
single external frontend] *** 2026-02-19 03:15:03.135393 | orchestrator | Thursday 19 February 2026 03:15:02 +0000 (0:00:03.189) 0:00:55.676 ***** 2026-02-19 03:15:03.135403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-19 03:15:03.135447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 03:15:03.135459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 03:15:03.135468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 03:15:03.135477 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:15:03.135487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-19 03:15:03.135501 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 03:15:03.135516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 03:15:03.135531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 03:15:11.195545 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:15:11.195652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 
'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-19 03:15:11.195672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 03:15:11.195685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 03:15:11.195697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 03:15:11.195797 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:15:11.195812 | orchestrator | 2026-02-19 03:15:11.195825 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-19 03:15:11.195837 | orchestrator | Thursday 19 February 2026 03:15:03 +0000 (0:00:00.624) 0:00:56.301 ***** 2026-02-19 03:15:11.195849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-19 03:15:11.195862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-19 03:15:11.195875 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:15:11.195902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-19 03:15:11.195914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8042', 'listen_port': '8042'}})  2026-02-19 03:15:11.195925 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:15:11.195936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-19 03:15:11.195963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-19 03:15:11.195975 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:15:11.195985 | orchestrator | 2026-02-19 03:15:11.195997 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-19 03:15:11.196008 | orchestrator | Thursday 19 February 2026 03:15:04 +0000 (0:00:01.060) 0:00:57.362 ***** 2026-02-19 03:15:11.196018 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:15:11.196029 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:15:11.196040 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:15:11.196050 | orchestrator | 2026-02-19 03:15:11.196062 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-19 03:15:11.196073 | orchestrator | Thursday 19 February 2026 03:15:05 +0000 (0:00:01.255) 0:00:58.617 ***** 2026-02-19 03:15:11.196086 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:15:11.196100 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:15:11.196119 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:15:11.196144 | orchestrator | 2026-02-19 03:15:11.196173 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-19 03:15:11.196193 | orchestrator | Thursday 19 February 2026 03:15:07 +0000 (0:00:01.873) 0:01:00.491 ***** 2026-02-19 03:15:11.196213 | orchestrator | included: barbican for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-19 03:15:11.196232 | orchestrator | 2026-02-19 03:15:11.196251 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-19 03:15:11.196272 | orchestrator | Thursday 19 February 2026 03:15:07 +0000 (0:00:00.573) 0:01:01.064 ***** 2026-02-19 03:15:11.196295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 03:15:11.196345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2026-02-19 03:15:11.196371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:15:11.196406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 03:15:11.781258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 03:15:11.781327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:15:11.781350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 
03:15:11.781364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 03:15:11.781368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:15:11.781373 | orchestrator | 2026-02-19 03:15:11.781378 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-19 03:15:11.781383 | orchestrator | Thursday 19 February 2026 03:15:11 +0000 (0:00:03.294) 0:01:04.359 ***** 2026-02-19 03:15:11.781399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-19 03:15:11.781404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 03:15:11.781412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:15:11.781416 | orchestrator | skipping: 
[testbed-node-0] 2026-02-19 03:15:11.781424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-19 03:15:11.781428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 03:15:11.781433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:15:11.781437 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:15:11.781445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-19 03:15:20.896009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 03:15:20.896103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:15:20.896115 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:15:20.896125 | orchestrator | 2026-02-19 03:15:20.896138 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-19 03:15:20.896152 | orchestrator | Thursday 19 February 2026 03:15:11 +0000 (0:00:00.587) 0:01:04.946 ***** 2026-02-19 03:15:20.896189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-19 03:15:20.896205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-19 03:15:20.896219 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:15:20.896232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-19 03:15:20.896245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-19 03:15:20.896258 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:15:20.896269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-19 03:15:20.896282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-19 03:15:20.896296 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:15:20.896310 | orchestrator | 2026-02-19 03:15:20.896323 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-19 03:15:20.896337 | orchestrator | Thursday 19 February 2026 03:15:12 +0000 (0:00:00.811) 0:01:05.757 ***** 2026-02-19 03:15:20.896350 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:15:20.896363 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:15:20.896377 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:15:20.896390 | orchestrator | 2026-02-19 03:15:20.896403 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-19 03:15:20.896417 | orchestrator | Thursday 19 February 2026 03:15:14 +0000 (0:00:01.507) 0:01:07.266 ***** 2026-02-19 03:15:20.896452 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:15:20.896465 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:15:20.896478 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:15:20.896490 | orchestrator | 2026-02-19 03:15:20.896502 | orchestrator | TASK [include_role : blazar] 
*************************************************** 2026-02-19 03:15:20.896515 | orchestrator | Thursday 19 February 2026 03:15:15 +0000 (0:00:01.895) 0:01:09.161 ***** 2026-02-19 03:15:20.896527 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:15:20.896540 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:15:20.896553 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:15:20.896566 | orchestrator | 2026-02-19 03:15:20.896579 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-19 03:15:20.896591 | orchestrator | Thursday 19 February 2026 03:15:16 +0000 (0:00:00.302) 0:01:09.463 ***** 2026-02-19 03:15:20.896604 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:15:20.896615 | orchestrator | 2026-02-19 03:15:20.896623 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-19 03:15:20.896646 | orchestrator | Thursday 19 February 2026 03:15:16 +0000 (0:00:00.608) 0:01:10.072 ***** 2026-02-19 03:15:20.896659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-19 03:15:20.896677 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-19 03:15:20.896686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-19 03:15:20.896695 | orchestrator | 2026-02-19 03:15:20.896703 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-19 03:15:20.896713 | orchestrator | Thursday 19 February 2026 03:15:19 +0000 (0:00:02.652) 0:01:12.725 ***** 2026-02-19 
03:15:20.896753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-19 03:15:20.896769 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:15:20.896785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-19 03:15:28.353674 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:15:28.353877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 
'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-19 03:15:28.353902 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:15:28.353915 | orchestrator | 2026-02-19 03:15:28.353927 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-19 03:15:28.353939 | orchestrator | Thursday 19 February 2026 03:15:20 +0000 (0:00:01.337) 0:01:14.062 ***** 2026-02-19 03:15:28.353970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-19 03:15:28.353985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 
5']}})  2026-02-19 03:15:28.353998 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:15:28.354009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-19 03:15:28.354183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-19 03:15:28.354199 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:15:28.354213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-19 03:15:28.354225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-19 03:15:28.354238 | 
orchestrator | skipping: [testbed-node-2] 2026-02-19 03:15:28.354250 | orchestrator | 2026-02-19 03:15:28.354263 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-19 03:15:28.354275 | orchestrator | Thursday 19 February 2026 03:15:22 +0000 (0:00:01.605) 0:01:15.668 ***** 2026-02-19 03:15:28.354288 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:15:28.354300 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:15:28.354312 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:15:28.354324 | orchestrator | 2026-02-19 03:15:28.354341 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-19 03:15:28.354373 | orchestrator | Thursday 19 February 2026 03:15:22 +0000 (0:00:00.409) 0:01:16.077 ***** 2026-02-19 03:15:28.354386 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:15:28.354398 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:15:28.354410 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:15:28.354423 | orchestrator | 2026-02-19 03:15:28.354435 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-19 03:15:28.354448 | orchestrator | Thursday 19 February 2026 03:15:24 +0000 (0:00:01.301) 0:01:17.379 ***** 2026-02-19 03:15:28.354460 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:15:28.354472 | orchestrator | 2026-02-19 03:15:28.354485 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-19 03:15:28.354497 | orchestrator | Thursday 19 February 2026 03:15:25 +0000 (0:00:00.877) 0:01:18.257 ***** 2026-02-19 03:15:28.354516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-19 03:15:28.354540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:15:28.354553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-19 03:15:28.354567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-19 03:15:28.354586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-19 03:15:29.052340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:15:29.052465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-19 03:15:29.052499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-19 03:15:29.052508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:15:29.052516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-19 03:15:29.052540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-19 03:15:29.052554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-19 03:15:29.052600 | orchestrator | 2026-02-19 03:15:29.052611 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-19 03:15:29.052620 | orchestrator | Thursday 19 February 2026 03:15:28 +0000 (0:00:03.353) 0:01:21.610 ***** 2026-02-19 03:15:29.052629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-19 03:15:29.052637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:15:29.052645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-19 03:15:29.052653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-19 03:15:29.052660 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:15:29.052680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-19 03:15:35.515066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:15:35.515149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-19 03:15:35.515158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-19 03:15:35.515166 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:15:35.515176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-19 03:15:35.515183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:15:35.515247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})
2026-02-19 03:15:35.515263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 03:15:35.515273 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:15:35.515283 | orchestrator |
2026-02-19 03:15:35.515295 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-02-19 03:15:35.515307 | orchestrator | Thursday 19 February 2026 03:15:29 +0000 (0:00:00.708) 0:01:22.319 *****
2026-02-19 03:15:35.515319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-19 03:15:35.515330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-19 03:15:35.515342 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:15:35.515353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-19 03:15:35.515364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-19 03:15:35.515375 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:15:35.515386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-19 03:15:35.515397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-19 03:15:35.515409 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:15:35.515421 | orchestrator |
2026-02-19 03:15:35.515432 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-02-19 03:15:35.515443 | orchestrator | Thursday 19 February 2026 03:15:30 +0000 (0:00:01.246) 0:01:23.566 *****
2026-02-19 03:15:35.515450 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:15:35.515463 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:15:35.515470 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:15:35.515476 | orchestrator |
2026-02-19 03:15:35.515482 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-02-19 03:15:35.515488 | orchestrator | Thursday 19 February 2026 03:15:31 +0000 (0:00:01.289) 0:01:24.855 *****
2026-02-19 03:15:35.515494 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:15:35.515504 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:15:35.515514 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:15:35.515532 | orchestrator |
2026-02-19 03:15:35.515542 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-02-19 03:15:35.515551 | orchestrator | Thursday 19 February 2026 03:15:33 +0000 (0:00:02.210) 0:01:27.066 *****
2026-02-19 03:15:35.515561 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:15:35.515572 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:15:35.515583 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:15:35.515593 | orchestrator |
2026-02-19 03:15:35.515604 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-02-19 03:15:35.515612 | orchestrator | Thursday 19 February 2026 03:15:34 +0000 (0:00:00.352) 0:01:27.419 *****
2026-02-19 03:15:35.515618 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:15:35.515625 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:15:35.515631 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:15:35.515637 | orchestrator |
2026-02-19 03:15:35.515643 | orchestrator | TASK [include_role : designate] ************************************************
2026-02-19 03:15:35.515649 | orchestrator | Thursday 19 February 2026 03:15:34 +0000 (0:00:00.305) 0:01:27.724 *****
2026-02-19 03:15:35.515655 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:15:35.515661 | orchestrator |
2026-02-19 03:15:35.515667 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-02-19 03:15:35.515691 | orchestrator | Thursday 19 February 2026 03:15:35 +0000 (0:00:00.949) 0:01:28.674 *****
2026-02-19 03:15:38.771447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-19 03:15:38.771556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-19 03:15:38.771569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-19 03:15:38.771598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-19 03:15:38.771607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-19 03:15:38.771644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-19 03:15:38.771654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-19 03:15:38.771662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-19 03:15:38.771670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-02-19 03:15:38.771683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:15:38.771691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-19 03:15:38.771698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-19 03:15:38.771714 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:15:39.608901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-19 03:15:39.609000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-19 03:15:39.609037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-19 03:15:39.609050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-19 03:15:39.609060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-19 03:15:39.609084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-19 03:15:39.609111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:15:39.609122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-19 03:15:39.609139 | orchestrator |
2026-02-19 03:15:39.609151 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-02-19 03:15:39.609162 | orchestrator | Thursday 19 February 2026 03:15:39 +0000 (0:00:03.521) 0:01:32.195 *****
2026-02-19 03:15:39.609172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 03:15:39.609183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout':
'30'}}})  2026-02-19 03:15:39.609193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-19 03:15:39.609203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-19 03:15:39.609221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-19 03:15:40.035289 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:15:40.035385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-19 03:15:40.035396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-19 03:15:40.035403 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:15:40.035412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-19 03:15:40.035938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-19 03:15:40.035961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-19 03:15:40.035985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-19 03:15:40.036001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:15:40.036011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-19 03:15:40.036017 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:15:40.036025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-19 03:15:40.036031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-19 03:15:40.036038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-19 03:15:40.036060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-19 03:15:49.509348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-19 03:15:49.509475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 03:15:49.509493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-19 03:15:49.509507 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:15:49.509521 | orchestrator |
2026-02-19 03:15:49.509533 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-02-19 03:15:49.509576 | orchestrator | Thursday 19 February 2026 03:15:40 +0000 (0:00:01.007) 0:01:33.202 *****
2026-02-19 03:15:49.509589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-19 03:15:49.509602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-19 03:15:49.509615 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:15:49.509626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-19 03:15:49.509637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-19 03:15:49.509648 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:15:49.509659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-19 03:15:49.509691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-19 03:15:49.509702 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:15:49.509713 | orchestrator |
2026-02-19 03:15:49.509724 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-02-19 03:15:49.509735 | orchestrator | Thursday 19 February 2026 03:15:41 +0000 (0:00:01.167) 0:01:34.370 *****
2026-02-19 03:15:49.509842 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:15:49.509856 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:15:49.509867 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:15:49.509878 | orchestrator |
2026-02-19 03:15:49.509889 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-02-19 03:15:49.509900 | orchestrator | Thursday 19 February 2026 03:15:42 +0000 (0:00:01.297) 0:01:35.667 *****
2026-02-19 03:15:49.509911 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:15:49.509922 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:15:49.509933 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:15:49.509944 | orchestrator |
2026-02-19 03:15:49.509955 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-02-19 03:15:49.509966 | orchestrator | Thursday 19 February 2026 03:15:44 +0000 (0:00:01.986) 0:01:37.653 *****
2026-02-19 03:15:49.509995 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:15:49.510006 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:15:49.510071 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:15:49.510085 | orchestrator |
2026-02-19 03:15:49.510096 | orchestrator | TASK [include_role : glance] ***************************************************
2026-02-19 03:15:49.510107 | orchestrator | Thursday 19 February 2026 03:15:44 +0000 (0:00:00.969) 0:01:37.949 *****
2026-02-19 03:15:49.510118 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:15:49.510128 | orchestrator |
2026-02-19 03:15:49.510139 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-02-19 03:15:49.510150 | orchestrator | Thursday 19 February 2026 03:15:45 +0000 (0:00:00.969) 0:01:38.919 *****
2026-02-19 03:15:49.510171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 03:15:49.510188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-19 03:15:49.510228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 03:15:52.337096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-19 03:15:52.337194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 03:15:52.337212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-19 03:15:52.337222 | orchestrator |
2026-02-19 03:15:52.337227 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-02-19 03:15:52.337232 | orchestrator | Thursday 19 February 2026 03:15:49 +0000 (0:00:03.865) 0:01:42.784 *****
2026-02-19 03:15:52.337243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-19 03:15:52.337257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-19 03:15:55.865466 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:15:55.865565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-19 03:15:55.865593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-19 03:15:55.865618 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:15:55.865640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-19 03:15:55.865652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-19 03:15:55.865665 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:15:55.865672 | orchestrator | 2026-02-19 03:15:55.865680 | orchestrator | TASK [haproxy-config : Configuring 
firewall for glance] ************************ 2026-02-19 03:15:55.865689 | orchestrator | Thursday 19 February 2026 03:15:52 +0000 (0:00:02.834) 0:01:45.619 ***** 2026-02-19 03:15:55.865696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-19 03:15:55.865709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-19 03:16:04.060237 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:04.060322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-19 03:16:04.060333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-19 03:16:04.060343 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:04.060353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-19 03:16:04.060378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-19 03:16:04.060389 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:04.060399 | orchestrator | 2026-02-19 03:16:04.060410 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-19 03:16:04.060420 | orchestrator | Thursday 19 February 2026 03:15:55 +0000 (0:00:03.408) 0:01:49.027 ***** 2026-02-19 
03:16:04.060453 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:16:04.060463 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:16:04.060472 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:16:04.060482 | orchestrator | 2026-02-19 03:16:04.060493 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-19 03:16:04.060504 | orchestrator | Thursday 19 February 2026 03:15:57 +0000 (0:00:01.283) 0:01:50.311 ***** 2026-02-19 03:16:04.060515 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:16:04.060525 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:16:04.060535 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:16:04.060544 | orchestrator | 2026-02-19 03:16:04.060553 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-19 03:16:04.060564 | orchestrator | Thursday 19 February 2026 03:15:59 +0000 (0:00:01.995) 0:01:52.307 ***** 2026-02-19 03:16:04.060573 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:04.060583 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:04.060592 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:04.060601 | orchestrator | 2026-02-19 03:16:04.060611 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-19 03:16:04.060621 | orchestrator | Thursday 19 February 2026 03:15:59 +0000 (0:00:00.342) 0:01:52.649 ***** 2026-02-19 03:16:04.060630 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:16:04.060639 | orchestrator | 2026-02-19 03:16:04.060645 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-19 03:16:04.060651 | orchestrator | Thursday 19 February 2026 03:16:00 +0000 (0:00:01.060) 0:01:53.710 ***** 2026-02-19 03:16:04.060672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 03:16:04.060681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 03:16:04.060687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 03:16:04.060693 | orchestrator | 2026-02-19 03:16:04.060699 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-19 03:16:04.060705 | orchestrator | Thursday 19 February 2026 03:16:03 +0000 (0:00:02.943) 0:01:56.653 ***** 2026-02-19 03:16:04.060719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-19 03:16:04.060726 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:04.060732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-19 03:16:04.060738 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:04.060744 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-19 03:16:04.060829 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:04.060864 | orchestrator | 2026-02-19 03:16:04.060872 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-19 03:16:04.060879 | orchestrator | Thursday 19 February 2026 03:16:03 +0000 (0:00:00.384) 0:01:57.038 ***** 2026-02-19 03:16:04.060887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-19 03:16:04.060902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-19 03:16:12.635269 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:12.635388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-19 03:16:12.635408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-19 03:16:12.635425 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:12.635438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-19 03:16:12.635450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-19 03:16:12.635492 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:12.635505 | orchestrator | 2026-02-19 03:16:12.635552 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-19 03:16:12.635567 | orchestrator | Thursday 19 February 2026 03:16:04 +0000 (0:00:00.825) 0:01:57.863 ***** 2026-02-19 03:16:12.635579 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:16:12.635591 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:16:12.635605 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:16:12.635619 | orchestrator | 2026-02-19 03:16:12.635633 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-19 03:16:12.635647 | orchestrator | Thursday 19 February 2026 03:16:06 +0000 (0:00:01.332) 0:01:59.196 ***** 2026-02-19 03:16:12.635660 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:16:12.635673 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:16:12.635686 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:16:12.635699 | orchestrator | 2026-02-19 03:16:12.635712 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-19 03:16:12.635743 | orchestrator | Thursday 19 February 2026 03:16:08 +0000 (0:00:02.053) 0:02:01.249 ***** 2026-02-19 03:16:12.635795 | orchestrator 
| skipping: [testbed-node-0] 2026-02-19 03:16:12.635808 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:12.635822 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:12.635835 | orchestrator | 2026-02-19 03:16:12.635849 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-19 03:16:12.635862 | orchestrator | Thursday 19 February 2026 03:16:08 +0000 (0:00:00.328) 0:02:01.578 ***** 2026-02-19 03:16:12.635875 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:16:12.635887 | orchestrator | 2026-02-19 03:16:12.635901 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-19 03:16:12.635916 | orchestrator | Thursday 19 February 2026 03:16:09 +0000 (0:00:01.075) 0:02:02.653 ***** 2026-02-19 03:16:12.635962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-19 03:16:12.636000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-19 03:16:12.636027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-19 03:16:14.225971 | orchestrator | 2026-02-19 03:16:14.226112 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-19 03:16:14.226150 | orchestrator | Thursday 19 February 2026 03:16:12 +0000 (0:00:03.145) 0:02:05.799 ***** 2026-02-19 03:16:14.226179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-02-19 03:16:14.226196 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:14.226236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-19 03:16:14.226272 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:14.226291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-19 03:16:14.226304 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:14.226316 | orchestrator | 2026-02-19 03:16:14.226328 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-19 03:16:14.226339 | orchestrator | Thursday 19 February 2026 03:16:13 +0000 (0:00:00.658) 0:02:06.458 ***** 2026-02-19 03:16:14.226352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-19 03:16:14.226376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-19 03:16:14.226388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-19 03:16:14.226405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-19 03:16:22.616162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-19 03:16:22.616323 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:22.616352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-19 03:16:22.616376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-19 03:16:22.616422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-19 03:16:22.616442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-19 03:16:22.616461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-19 03:16:22.616479 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:22.616497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-19 03:16:22.616514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-19 03:16:22.616531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-19 03:16:22.616581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-19 03:16:22.616601 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-19 03:16:22.616617 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:22.616634 | orchestrator | 2026-02-19 03:16:22.616653 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-19 03:16:22.616672 | orchestrator | Thursday 19 February 2026 03:16:14 +0000 (0:00:00.931) 0:02:07.390 ***** 2026-02-19 03:16:22.616690 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:16:22.616708 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:16:22.616724 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:16:22.616740 | orchestrator | 2026-02-19 03:16:22.616784 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-19 03:16:22.616801 | orchestrator | Thursday 19 February 2026 03:16:15 +0000 (0:00:01.563) 0:02:08.953 ***** 2026-02-19 03:16:22.616818 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:16:22.616834 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:16:22.616851 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:16:22.616866 | orchestrator | 2026-02-19 03:16:22.616883 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-19 03:16:22.616899 | orchestrator | Thursday 19 February 2026 03:16:17 +0000 (0:00:01.959) 0:02:10.913 ***** 2026-02-19 03:16:22.616916 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:22.616932 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:22.616975 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:22.616992 | orchestrator | 2026-02-19 03:16:22.617008 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-19 03:16:22.617026 | orchestrator | Thursday 19 February 2026 03:16:18 +0000 (0:00:00.294) 0:02:11.207 
***** 2026-02-19 03:16:22.617042 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:22.617059 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:22.617075 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:22.617091 | orchestrator | 2026-02-19 03:16:22.617108 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-19 03:16:22.617124 | orchestrator | Thursday 19 February 2026 03:16:18 +0000 (0:00:00.315) 0:02:11.523 ***** 2026-02-19 03:16:22.617141 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:16:22.617157 | orchestrator | 2026-02-19 03:16:22.617174 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-19 03:16:22.617191 | orchestrator | Thursday 19 February 2026 03:16:19 +0000 (0:00:01.142) 0:02:12.666 ***** 2026-02-19 03:16:22.617223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-19 
03:16:22.617264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 03:16:22.617284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-19 03:16:22.617304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 03:16:22.617335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 03:16:23.170233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 03:16:23.170360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-19 03:16:23.170395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 03:16:23.170405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 03:16:23.170414 | orchestrator | 2026-02-19 03:16:23.170424 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-19 03:16:23.170434 | orchestrator | Thursday 19 February 2026 03:16:22 +0000 (0:00:03.111) 0:02:15.778 ***** 2026-02-19 03:16:23.170461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-19 03:16:23.170477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 03:16:23.170486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 03:16:23.170501 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:23.170512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-19 
03:16:23.170520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 03:16:23.170529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 03:16:23.170537 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:23.170557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-19 03:16:32.066645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 03:16:32.066739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 03:16:32.066748 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:32.066811 | orchestrator | 2026-02-19 03:16:32.066821 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] 
********************** 2026-02-19 03:16:32.066829 | orchestrator | Thursday 19 February 2026 03:16:23 +0000 (0:00:00.554) 0:02:16.332 ***** 2026-02-19 03:16:32.066836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-19 03:16:32.066845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-19 03:16:32.066853 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:32.066860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-19 03:16:32.066866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-19 03:16:32.066873 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:32.066880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-19 03:16:32.066886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-19 03:16:32.066892 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:32.066898 | orchestrator | 2026-02-19 03:16:32.066904 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-19 03:16:32.066911 | orchestrator | Thursday 19 February 2026 03:16:24 +0000 (0:00:00.986) 0:02:17.319 ***** 2026-02-19 03:16:32.066917 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:16:32.066923 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:16:32.066948 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:16:32.066954 | orchestrator | 2026-02-19 03:16:32.066960 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-19 03:16:32.066966 | orchestrator | Thursday 19 February 2026 03:16:25 +0000 (0:00:01.286) 0:02:18.605 ***** 2026-02-19 03:16:32.066971 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:16:32.066977 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:16:32.066983 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:16:32.066989 | orchestrator | 2026-02-19 03:16:32.066995 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-19 03:16:32.067000 | orchestrator | Thursday 19 February 2026 03:16:27 +0000 (0:00:01.985) 0:02:20.591 ***** 2026-02-19 03:16:32.067006 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:32.067025 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:32.067031 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:32.067036 | orchestrator | 2026-02-19 03:16:32.067042 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-19 03:16:32.067063 | orchestrator | Thursday 19 February 2026 03:16:27 +0000 (0:00:00.291) 0:02:20.883 ***** 2026-02-19 03:16:32.067069 | orchestrator | included: magnum 
for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:16:32.067076 | orchestrator | 2026-02-19 03:16:32.067082 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-19 03:16:32.067090 | orchestrator | Thursday 19 February 2026 03:16:28 +0000 (0:00:01.177) 0:02:22.060 ***** 2026-02-19 03:16:32.067098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 03:16:32.067108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 03:16:32.067115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 03:16:32.067130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 
03:16:32.067141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 03:16:37.209290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 03:16:37.210133 | orchestrator | 2026-02-19 03:16:37.210163 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-19 03:16:37.210171 | orchestrator | Thursday 19 February 2026 03:16:32 +0000 (0:00:03.162) 0:02:25.223 ***** 2026-02-19 03:16:37.210179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-19 03:16:37.210216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 03:16:37.210234 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:37.210272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-19 03:16:37.210299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 03:16:37.210305 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:37.210311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-19 03:16:37.210317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 03:16:37.210329 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:37.210335 | orchestrator | 2026-02-19 03:16:37.210342 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-19 03:16:37.210348 | orchestrator | Thursday 19 February 2026 03:16:32 +0000 (0:00:00.621) 0:02:25.844 ***** 2026-02-19 03:16:37.210356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-19 03:16:37.210364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-19 03:16:37.210373 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:16:37.210379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-19 03:16:37.210385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-19 03:16:37.210392 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:16:37.210399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-19 03:16:37.210403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-19 03:16:37.210407 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:16:37.210411 | orchestrator |
2026-02-19 03:16:37.210418 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-02-19 03:16:37.210421 | orchestrator | Thursday 19 February 2026 03:16:33 +0000 (0:00:00.851) 0:02:26.695 *****
2026-02-19 03:16:37.210425 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:16:37.210429 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:16:37.210433 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:16:37.210437 | orchestrator |
2026-02-19 03:16:37.210440 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-02-19 03:16:37.210444 | orchestrator | Thursday 19 February 2026 03:16:35 +0000 (0:00:01.590) 0:02:28.286 *****
2026-02-19 03:16:37.210448 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:16:37.210451 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:16:37.210455 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:16:37.210459 | orchestrator |
2026-02-19 03:16:37.210463 | orchestrator | TASK [include_role : manila] ***************************************************
2026-02-19 03:16:37.210471 | orchestrator | Thursday 19 February 2026 03:16:37 +0000 (0:00:02.068) 0:02:30.354 *****
2026-02-19 03:16:41.568109 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:16:41.568187 | orchestrator |
2026-02-19 03:16:41.568196 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-02-19 03:16:41.568201 | orchestrator | Thursday 19 February 2026 03:16:38 +0000 (0:00:00.997) 0:02:31.352 *****
2026-02-19 03:16:41.568209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-19 03:16:41.568232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image':
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:16:41.568238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 03:16:41.568245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 03:16:41.568260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 03:16:41.568278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:16:41.568284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 03:16:41.568293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 03:16:41.568297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 03:16:41.568302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:16:41.568310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 03:16:41.568319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 03:16:42.538243 | orchestrator | 2026-02-19 03:16:42.539327 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-19 03:16:42.539378 | orchestrator | Thursday 19 February 2026 03:16:41 +0000 (0:00:03.461) 0:02:34.814 ***** 2026-02-19 03:16:42.539415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-19 03:16:42.539429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:16:42.539440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 03:16:42.539450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 03:16:42.539460 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:42.539484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-19 03:16:42.539517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:16:42.539534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 03:16:42.539543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 03:16:42.539552 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:42.539561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-19 03:16:42.539570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:16:42.539584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-19 03:16:42.539601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-19 03:16:53.584452 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:16:53.584554 | orchestrator |
2026-02-19 03:16:53.584570 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-02-19 03:16:53.584579 | orchestrator | Thursday 19 February 2026 03:16:42 +0000 (0:00:00.971) 0:02:35.785 *****
2026-02-19 03:16:53.584585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-19 03:16:53.584593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-19 03:16:53.584600 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:16:53.584608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-19 03:16:53.584614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-19 03:16:53.584621 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:16:53.584627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-19 03:16:53.584633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-19 03:16:53.584639 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:16:53.584646 | orchestrator |
2026-02-19 03:16:53.584652 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-02-19 03:16:53.584659 | orchestrator | Thursday 19 February 2026 03:16:43 +0000 (0:00:00.871) 0:02:36.657 *****
2026-02-19 03:16:53.584665 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:16:53.584671 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:16:53.584678 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:16:53.584684 | orchestrator |
2026-02-19 03:16:53.584690 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-02-19 03:16:53.584697 | orchestrator | Thursday 19 February 2026 03:16:44 +0000 (0:00:01.325) 0:02:37.982 *****
2026-02-19 03:16:53.584703 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:16:53.584709 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:16:53.584715 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:16:53.584722 | orchestrator |
2026-02-19 03:16:53.584728 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-02-19 03:16:53.584734 | orchestrator | Thursday 19 February 2026 03:16:46 +0000 (0:00:02.020) 0:02:40.003 *****
2026-02-19 03:16:53.584740 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:16:53.584746 | orchestrator |
2026-02-19 03:16:53.584753 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-02-19 03:16:53.584759 | orchestrator | Thursday 19 February 2026 03:16:48 +0000 (0:00:01.248) 0:02:41.251 *****
2026-02-19 03:16:53.584812 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 03:16:53.584820 | orchestrator |
2026-02-19 03:16:53.584826 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-02-19 03:16:53.584847 | orchestrator | Thursday 19 February 2026 03:16:51 +0000 (0:00:03.250) 0:02:44.502 *****
2026-02-19 03:16:53.584879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:16:53.584889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-19 03:16:53.584897 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:53.584907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:16:53.584919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-19 03:16:53.584925 | orchestrator | skipping: 
[testbed-node-1] 2026-02-19 03:16:53.584941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:16:55.817490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 
'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-19 03:16:55.817631 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:16:55.817657 | orchestrator | 2026-02-19 03:16:55.817680 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-19 03:16:55.817702 | orchestrator | Thursday 19 February 2026 03:16:53 +0000 (0:00:02.240) 0:02:46.743 ***** 2026-02-19 03:16:55.817874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:16:55.817897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-19 03:16:55.817909 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:16:55.817947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:16:55.817982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-19 03:16:55.817994 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:16:55.818006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:16:55.818083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-19 03:17:05.291269 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:05.291371 | orchestrator | 2026-02-19 03:17:05.291386 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-19 03:17:05.291397 | orchestrator | Thursday 19 February 2026 03:16:55 +0000 (0:00:02.236) 0:02:48.979 ***** 2026-02-19 03:17:05.291408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-19 03:17:05.291438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-19 03:17:05.291465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-19 03:17:05.291471 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:05.291476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-19 03:17:05.291481 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:05.291485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-19 03:17:05.291490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-19 03:17:05.291495 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:05.291506 | orchestrator | 2026-02-19 03:17:05.291511 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-19 03:17:05.291516 | orchestrator | Thursday 19 February 2026 03:16:58 +0000 (0:00:02.848) 0:02:51.828 ***** 2026-02-19 03:17:05.291521 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:17:05.291543 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:17:05.291548 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:17:05.291552 | orchestrator | 2026-02-19 03:17:05.291557 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-19 03:17:05.291562 | orchestrator | Thursday 19 February 2026 03:17:00 +0000 (0:00:02.097) 0:02:53.925 ***** 2026-02-19 03:17:05.291566 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:05.291571 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:05.291575 | 
orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:05.291580 | orchestrator | 2026-02-19 03:17:05.291585 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-19 03:17:05.291589 | orchestrator | Thursday 19 February 2026 03:17:02 +0000 (0:00:01.389) 0:02:55.315 ***** 2026-02-19 03:17:05.291594 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:05.291598 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:05.291603 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:05.291607 | orchestrator | 2026-02-19 03:17:05.291612 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-19 03:17:05.291616 | orchestrator | Thursday 19 February 2026 03:17:02 +0000 (0:00:00.278) 0:02:55.594 ***** 2026-02-19 03:17:05.291621 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:17:05.291627 | orchestrator | 2026-02-19 03:17:05.291635 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-19 03:17:05.291642 | orchestrator | Thursday 19 February 2026 03:17:03 +0000 (0:00:01.311) 0:02:56.905 ***** 2026-02-19 03:17:05.291654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2026-02-19 03:17:05.291706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-19 03:17:05.291715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-19 03:17:05.291723 | orchestrator | 2026-02-19 03:17:05.291731 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-19 03:17:05.291747 | orchestrator | Thursday 19 February 2026 03:17:05 +0000 (0:00:01.453) 0:02:58.359 ***** 2026-02-19 03:17:05.291761 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-19 03:17:13.303104 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:13.303203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-19 03:17:13.303218 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:13.303228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-19 03:17:13.303237 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:13.303247 | orchestrator | 2026-02-19 03:17:13.303257 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-19 03:17:13.303267 | orchestrator | Thursday 19 February 2026 03:17:05 +0000 (0:00:00.371) 0:02:58.731 ***** 2026-02-19 03:17:13.303279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-19 03:17:13.303289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-19 03:17:13.303298 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:13.303307 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:13.303316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-19 03:17:13.303346 | orchestrator | skipping: 
[testbed-node-2] 2026-02-19 03:17:13.303354 | orchestrator | 2026-02-19 03:17:13.303399 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-19 03:17:13.303409 | orchestrator | Thursday 19 February 2026 03:17:06 +0000 (0:00:00.799) 0:02:59.530 ***** 2026-02-19 03:17:13.303417 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:13.303425 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:13.303434 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:13.303442 | orchestrator | 2026-02-19 03:17:13.303449 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-19 03:17:13.303457 | orchestrator | Thursday 19 February 2026 03:17:06 +0000 (0:00:00.450) 0:02:59.981 ***** 2026-02-19 03:17:13.303466 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:13.303474 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:13.303482 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:13.303490 | orchestrator | 2026-02-19 03:17:13.303498 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-19 03:17:13.303506 | orchestrator | Thursday 19 February 2026 03:17:08 +0000 (0:00:01.210) 0:03:01.191 ***** 2026-02-19 03:17:13.303514 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:13.303523 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:13.303531 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:13.303540 | orchestrator | 2026-02-19 03:17:13.303548 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-19 03:17:13.303557 | orchestrator | Thursday 19 February 2026 03:17:08 +0000 (0:00:00.299) 0:03:01.491 ***** 2026-02-19 03:17:13.303566 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:17:13.303574 | orchestrator | 2026-02-19 03:17:13.303583 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-19 03:17:13.303592 | orchestrator | Thursday 19 February 2026 03:17:09 +0000 (0:00:01.414) 0:03:02.906 ***** 2026-02-19 03:17:13.303622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:17:13.303640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.303648 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.303667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.303676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-19 03:17:13.303693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.467048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-19 03:17:13.467221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-19 03:17:13.467244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.467279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:17:13.467292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:17:13.467306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.467337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.467357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-19 03:17:13.467376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-19 03:17:13.467389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.467401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.467420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-19 03:17:13.621374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.621466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:17:13.621512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-19 03:17:13.621521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.621527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:17:13.621549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.621567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.621579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.621584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-19 03:17:13.621590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.621596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-19 03:17:13.621611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-19 03:17:13.805240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-19 03:17:13.805358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-19 
03:17:13.805382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-19 03:17:13.805401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:17:13.805419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.805438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.805489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:17:13.805519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-19 03:17:13.805530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-19 03:17:13.805541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.805551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-19 03:17:13.805561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:13.805571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-19 03:17:13.805594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-19 03:17:14.829294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:14.829397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:17:14.829412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-19 03:17:14.829425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:17:14.829437 | orchestrator | 2026-02-19 03:17:14.829448 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-19 03:17:14.829510 | orchestrator | Thursday 19 February 2026 03:17:13 +0000 (0:00:04.063) 0:03:06.969 ***** 2026-02-19 03:17:14.829531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:17:14.829553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:14.829560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:14.829567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:14.829572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-02-19 03:17:14.829586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-19 03:17:14.829597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-19 03:17:14.912954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-19 03:17:14.913055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-19 03:17:14.913072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:14.913085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:14.913134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:14.913166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-19 03:17:14.913178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:14.913190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:14.913201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-19 03:17:14.913221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-19 03:17:14.913245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:14.913265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-19 03:17:15.000214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-19 03:17:15.000301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:15.000321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-19 03:17:15.000345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-19 03:17:15.000391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:15.000425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-19 03:17:15.000463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-19 03:17:15.000482 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:17:15.000499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:15.000518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-19 03:17:15.000546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-19 03:17:15.000563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:15.000588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-19 03:17:15.245698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:15.245843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:15.245900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:15.245931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-19 03:17:15.245947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-19 03:17:15.245971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-19 03:17:15.245976 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:17:15.245983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:15.245988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-19 03:17:15.245998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-19 03:17:15.246003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:15.246013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-19 03:17:15.246077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:25.022419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-19 03:17:25.022591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-19 03:17:25.022624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-19 03:17:25.022707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-19 03:17:25.022740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-19 03:17:25.022754 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:17:25.022768 | orchestrator |
2026-02-19 03:17:25.022814 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-02-19 03:17:25.022830 | orchestrator | Thursday 19 February 2026 03:17:15 +0000 (0:00:01.443) 0:03:08.413 *****
2026-02-19 03:17:25.022842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-02-19 03:17:25.022855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-02-19 03:17:25.022868 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:17:25.022898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-02-19 03:17:25.022912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-02-19 03:17:25.022925 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:17:25.022937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-02-19 03:17:25.022950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-02-19 03:17:25.022971 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:17:25.022985 | orchestrator |
2026-02-19 03:17:25.022999 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-02-19 03:17:25.023011 | orchestrator | Thursday 19 February 2026 03:17:17 +0000 (0:00:01.893) 0:03:10.307 *****
2026-02-19 03:17:25.023024 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:17:25.023037 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:17:25.023051 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:17:25.023064 | orchestrator |
2026-02-19 03:17:25.023075 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-02-19 03:17:25.023086 | orchestrator | Thursday 19 February 2026 03:17:18 +0000 (0:00:01.295) 0:03:11.602 *****
2026-02-19 03:17:25.023096 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:17:25.023107 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:17:25.023118 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:17:25.023128 | orchestrator |
2026-02-19 03:17:25.023139 | orchestrator | TASK [include_role : placement] ************************************************
2026-02-19 03:17:25.023150 | orchestrator | Thursday 19 February 2026 03:17:20 +0000 (0:00:02.080) 0:03:13.682 *****
2026-02-19 03:17:25.023161 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:17:25.023172 | orchestrator |
2026-02-19 03:17:25.023182 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-02-19 03:17:25.023193 | orchestrator | Thursday 19 February 2026 03:17:21 +0000 (0:00:01.174) 0:03:14.856 *****
2026-02-19 03:17:25.023205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-19 03:17:25.023224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-19 03:17:25.023245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-19 03:17:35.498738 | orchestrator |
2026-02-19 03:17:35.498888 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-02-19 03:17:35.498901 | orchestrator | Thursday 19 February 2026 03:17:25 +0000 (0:00:03.327) 0:03:18.184 *****
2026-02-19 03:17:35.498911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-19 03:17:35.498922 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:17:35.498930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-19 03:17:35.498938 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:17:35.498959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-19 03:17:35.498967 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:17:35.498973 | orchestrator |
2026-02-19 03:17:35.498980 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-02-19 03:17:35.498987 | orchestrator | Thursday 19 February 2026 03:17:25 +0000 (0:00:00.505) 0:03:18.690 *****
2026-02-19 03:17:35.498995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-19 03:17:35.499022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-19 03:17:35.499030 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:17:35.499037 |
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-19 03:17:35.499058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-19 03:17:35.499065 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:35.499072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-19 03:17:35.499079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-19 03:17:35.499085 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:35.499092 | orchestrator | 2026-02-19 03:17:35.499099 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-19 03:17:35.499105 | orchestrator | Thursday 19 February 2026 03:17:26 +0000 (0:00:00.747) 0:03:19.438 ***** 2026-02-19 03:17:35.499112 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:17:35.499119 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:17:35.499125 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:17:35.499132 | orchestrator | 2026-02-19 03:17:35.499138 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-19 03:17:35.499145 | orchestrator | Thursday 19 February 2026 03:17:28 +0000 (0:00:01.893) 0:03:21.331 ***** 2026-02-19 03:17:35.499152 | orchestrator | changed: [testbed-node-0] 2026-02-19 
03:17:35.499158 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:17:35.499165 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:17:35.499171 | orchestrator | 2026-02-19 03:17:35.499178 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-19 03:17:35.499185 | orchestrator | Thursday 19 February 2026 03:17:30 +0000 (0:00:01.856) 0:03:23.188 ***** 2026-02-19 03:17:35.499192 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:17:35.499198 | orchestrator | 2026-02-19 03:17:35.499205 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-19 03:17:35.499216 | orchestrator | Thursday 19 February 2026 03:17:31 +0000 (0:00:01.537) 0:03:24.726 ***** 2026-02-19 03:17:35.499230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 03:17:35.499257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 03:17:35.499278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2026-02-19 03:17:36.266633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 03:17:36.266728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:17:36.266742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 03:17:36.266772 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 03:17:36.266875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:17:36.266905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 03:17:36.266917 | orchestrator | 2026-02-19 03:17:36.266929 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-19 03:17:36.266940 | orchestrator | Thursday 19 February 2026 03:17:35 +0000 (0:00:03.937) 0:03:28.664 ***** 2026-02-19 03:17:36.266952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-19 03:17:36.266970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:17:36.266986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 03:17:36.266996 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:36.267015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-19 03:17:46.081075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:17:46.081174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 03:17:46.081185 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:46.081210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-19 03:17:46.081262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 03:17:46.081267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 03:17:46.081271 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:46.081275 | orchestrator | 2026-02-19 03:17:46.081280 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-19 03:17:46.081286 | orchestrator | Thursday 19 February 2026 03:17:36 +0000 (0:00:00.768) 0:03:29.432 ***** 2026-02-19 03:17:46.081303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-19 03:17:46.081310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-19 03:17:46.081316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-19 03:17:46.081321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-19 03:17:46.081326 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:46.081330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-19 03:17:46.081334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-19 03:17:46.081341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-19 03:17:46.081345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-19 03:17:46.081349 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:46.081353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-19 03:17:46.081357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-19 03:17:46.081364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-19 
03:17:46.081367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-19 03:17:46.081371 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:46.081375 | orchestrator | 2026-02-19 03:17:46.081379 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-19 03:17:46.081383 | orchestrator | Thursday 19 February 2026 03:17:37 +0000 (0:00:01.038) 0:03:30.471 ***** 2026-02-19 03:17:46.081386 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:17:46.081390 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:17:46.081394 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:17:46.081398 | orchestrator | 2026-02-19 03:17:46.081401 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-19 03:17:46.081405 | orchestrator | Thursday 19 February 2026 03:17:38 +0000 (0:00:01.305) 0:03:31.777 ***** 2026-02-19 03:17:46.081409 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:17:46.081413 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:17:46.081416 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:17:46.081420 | orchestrator | 2026-02-19 03:17:46.081424 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-19 03:17:46.081427 | orchestrator | Thursday 19 February 2026 03:17:40 +0000 (0:00:01.880) 0:03:33.657 ***** 2026-02-19 03:17:46.081431 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:17:46.081435 | orchestrator | 2026-02-19 03:17:46.081439 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-19 03:17:46.081442 | orchestrator | Thursday 19 February 2026 03:17:41 +0000 
(0:00:01.364) 0:03:35.021 ***** 2026-02-19 03:17:46.081446 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-19 03:17:46.081452 | orchestrator | 2026-02-19 03:17:46.081456 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-19 03:17:46.081459 | orchestrator | Thursday 19 February 2026 03:17:42 +0000 (0:00:00.733) 0:03:35.755 ***** 2026-02-19 03:17:46.081467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-19 03:17:57.496519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-19 03:17:57.496665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-19 03:17:57.496695 | orchestrator | 2026-02-19 03:17:57.496717 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-19 03:17:57.496738 | orchestrator | Thursday 19 February 2026 03:17:46 +0000 (0:00:03.492) 0:03:39.247 ***** 2026-02-19 03:17:57.496759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 03:17:57.496779 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:57.496853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 03:17:57.496873 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:57.496893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 03:17:57.496913 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:57.496932 | orchestrator | 2026-02-19 03:17:57.496951 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-19 03:17:57.496972 | orchestrator | Thursday 19 February 2026 03:17:47 +0000 (0:00:01.331) 0:03:40.579 ***** 2026-02-19 03:17:57.496991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-19 03:17:57.497012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-19 03:17:57.497064 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:57.497083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-19 03:17:57.497100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-19 03:17:57.497112 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:57.497144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-19 03:17:57.497157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-19 03:17:57.497168 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:57.497179 | orchestrator | 2026-02-19 03:17:57.497190 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-19 03:17:57.497201 | orchestrator | Thursday 19 February 2026 03:17:48 +0000 (0:00:01.423) 0:03:42.002 ***** 2026-02-19 03:17:57.497211 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:17:57.497222 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:17:57.497233 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:17:57.497243 | orchestrator | 2026-02-19 03:17:57.497254 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-19 03:17:57.497265 | orchestrator | Thursday 19 February 2026 03:17:51 +0000 (0:00:02.386) 0:03:44.389 ***** 2026-02-19 03:17:57.497275 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:17:57.497286 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:17:57.497297 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:17:57.497307 | orchestrator | 2026-02-19 03:17:57.497317 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-19 03:17:57.497326 | orchestrator | Thursday 19 February 2026 03:17:54 +0000 (0:00:02.898) 0:03:47.288 ***** 2026-02-19 03:17:57.497336 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-19 03:17:57.497347 | orchestrator | 
2026-02-19 03:17:57.497357 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-19 03:17:57.497367 | orchestrator | Thursday 19 February 2026 03:17:55 +0000 (0:00:01.168) 0:03:48.457 ***** 2026-02-19 03:17:57.497384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 03:17:57.497395 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:57.497405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 03:17:57.497462 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:17:57.497474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 03:17:57.497484 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:17:57.497493 | orchestrator | 2026-02-19 03:17:57.497503 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-19 03:17:57.497513 | orchestrator | Thursday 19 February 2026 03:17:56 +0000 (0:00:00.992) 0:03:49.450 ***** 2026-02-19 03:17:57.497522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 03:17:57.497532 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:17:57.497553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 03:18:19.640444 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:19.640553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 03:18:19.640570 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:19.640580 | orchestrator | 2026-02-19 03:18:19.640591 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-19 03:18:19.640601 | orchestrator | Thursday 19 February 2026 03:17:57 +0000 (0:00:01.208) 0:03:50.658 ***** 2026-02-19 03:18:19.640611 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:19.640620 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:19.640628 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:19.640637 | orchestrator | 2026-02-19 03:18:19.640646 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-19 03:18:19.640654 | orchestrator | Thursday 19 February 2026 03:17:58 +0000 (0:00:01.451) 0:03:52.109 ***** 2026-02-19 03:18:19.640670 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:18:19.640686 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:18:19.640700 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:18:19.640714 | orchestrator | 2026-02-19 03:18:19.640727 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-19 03:18:19.640741 | orchestrator | Thursday 19 February 2026 03:18:01 +0000 (0:00:02.667) 0:03:54.776 ***** 2026-02-19 03:18:19.640782 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:18:19.640797 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:18:19.640871 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:18:19.640885 | orchestrator | 2026-02-19 03:18:19.640919 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-serialproxy] ***************** 2026-02-19 03:18:19.640935 | orchestrator | Thursday 19 February 2026 03:18:04 +0000 (0:00:02.597) 0:03:57.374 ***** 2026-02-19 03:18:19.640952 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-19 03:18:19.640969 | orchestrator | 2026-02-19 03:18:19.640984 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-19 03:18:19.640993 | orchestrator | Thursday 19 February 2026 03:18:05 +0000 (0:00:01.091) 0:03:58.466 ***** 2026-02-19 03:18:19.641003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-19 03:18:19.641012 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:19.641021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-19 03:18:19.641030 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:19.641039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-19 03:18:19.641047 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:19.641056 | orchestrator | 2026-02-19 03:18:19.641065 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-19 03:18:19.641075 | orchestrator | Thursday 19 February 2026 03:18:06 +0000 (0:00:01.224) 0:03:59.690 ***** 2026-02-19 03:18:19.641100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-19 03:18:19.641110 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:19.641118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-19 03:18:19.641136 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:19.641146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-19 03:18:19.641154 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:19.641163 | orchestrator | 2026-02-19 03:18:19.641176 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-19 03:18:19.641185 | orchestrator | Thursday 19 February 2026 03:18:07 +0000 (0:00:01.250) 0:04:00.941 ***** 2026-02-19 03:18:19.641194 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:19.641203 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:19.641211 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:19.641220 | orchestrator | 2026-02-19 03:18:19.641228 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-19 03:18:19.641237 | orchestrator | Thursday 19 February 2026 03:18:09 +0000 (0:00:01.715) 0:04:02.657 ***** 2026-02-19 03:18:19.641245 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:18:19.641254 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:18:19.641262 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:18:19.641270 | orchestrator | 2026-02-19 03:18:19.641279 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-19 03:18:19.641287 | orchestrator | Thursday 19 February 
2026 03:18:11 +0000 (0:00:02.288) 0:04:04.946 ***** 2026-02-19 03:18:19.641296 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:18:19.641304 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:18:19.641312 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:18:19.641321 | orchestrator | 2026-02-19 03:18:19.641329 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-19 03:18:19.641338 | orchestrator | Thursday 19 February 2026 03:18:14 +0000 (0:00:03.119) 0:04:08.065 ***** 2026-02-19 03:18:19.641346 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:18:19.641354 | orchestrator | 2026-02-19 03:18:19.641363 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-19 03:18:19.641371 | orchestrator | Thursday 19 February 2026 03:18:16 +0000 (0:00:01.285) 0:04:09.351 ***** 2026-02-19 03:18:19.641381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 03:18:19.641391 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 03:18:19.641413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 03:18:20.324103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 03:18:20.324195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:18:20.324205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 03:18:20.324212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 
03:18:20.324219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 03:18:20.324240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 03:18:20.324258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 03:18:20.324265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:18:20.324271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 03:18:20.324277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 03:18:20.324282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 03:18:20.324317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:18:20.324324 | orchestrator | 2026-02-19 03:18:20.324331 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-19 03:18:20.324337 | orchestrator | Thursday 19 February 2026 03:18:19 +0000 (0:00:03.586) 0:04:12.938 ***** 2026-02-19 03:18:20.324348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 03:18:20.467404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 03:18:20.467498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-02-19 03:18:20.467511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 03:18:20.467520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:18:20.467547 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:20.467558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 03:18:20.467566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 03:18:20.467594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 03:18:20.467602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 03:18:20.467610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:18:20.467622 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:20.467630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 03:18:20.467638 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 03:18:20.467645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 03:18:20.467662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 03:18:31.784512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 03:18:31.784601 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:31.784612 | orchestrator | 2026-02-19 03:18:31.784620 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-19 03:18:31.784628 | orchestrator | Thursday 19 February 2026 03:18:20 +0000 (0:00:00.698) 0:04:13.637 ***** 2026-02-19 03:18:31.784636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-19 03:18:31.784661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-19 03:18:31.784669 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:31.784676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-19 03:18:31.784682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-19 03:18:31.784689 | orchestrator | skipping: 
[testbed-node-1] 2026-02-19 03:18:31.784695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-19 03:18:31.784701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-19 03:18:31.784707 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:31.784713 | orchestrator | 2026-02-19 03:18:31.784720 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-19 03:18:31.784726 | orchestrator | Thursday 19 February 2026 03:18:21 +0000 (0:00:00.880) 0:04:14.517 ***** 2026-02-19 03:18:31.784732 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:18:31.784738 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:18:31.784744 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:18:31.784750 | orchestrator | 2026-02-19 03:18:31.784756 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-19 03:18:31.784762 | orchestrator | Thursday 19 February 2026 03:18:23 +0000 (0:00:01.738) 0:04:16.256 ***** 2026-02-19 03:18:31.784768 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:18:31.784774 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:18:31.784781 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:18:31.784787 | orchestrator | 2026-02-19 03:18:31.784793 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-19 03:18:31.784799 | orchestrator | Thursday 19 February 2026 03:18:25 +0000 (0:00:02.056) 0:04:18.312 ***** 2026-02-19 03:18:31.784842 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 
2026-02-19 03:18:31.784850 | orchestrator | 2026-02-19 03:18:31.784856 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-19 03:18:31.784862 | orchestrator | Thursday 19 February 2026 03:18:26 +0000 (0:00:01.373) 0:04:19.685 ***** 2026-02-19 03:18:31.784881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:18:31.784904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:18:31.784917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:18:31.784925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:18:31.784937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:18:31.784951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:18:33.639142 | orchestrator | 2026-02-19 03:18:33.639263 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-19 03:18:33.639281 | orchestrator | Thursday 19 February 2026 03:18:31 +0000 (0:00:05.253) 0:04:24.939 ***** 2026-02-19 03:18:33.639296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-19 03:18:33.639314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-19 03:18:33.639327 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:33.639359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-19 03:18:33.639373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-19 03:18:33.639424 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:33.639438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-19 03:18:33.639450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-19 03:18:33.639497 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:33.639509 | orchestrator | 2026-02-19 03:18:33.639520 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-19 03:18:33.639533 | orchestrator | Thursday 19 February 2026 03:18:32 +0000 (0:00:00.984) 0:04:25.923 ***** 2026-02-19 03:18:33.639544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-19 03:18:33.639557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-19 03:18:33.639572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-19 03:18:33.639600 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:33.639629 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-19 03:18:33.639649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-19 03:18:33.639668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-19 03:18:33.639687 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:33.639705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-19 03:18:33.639725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-19 03:18:33.639764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-19 03:18:39.701859 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:39.701953 | orchestrator | 2026-02-19 03:18:39.701967 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-19 03:18:39.701977 | orchestrator | Thursday 19 February 2026 03:18:33 +0000 (0:00:00.876) 0:04:26.799 ***** 2026-02-19 
03:18:39.701985 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:39.701993 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:39.702002 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:39.702010 | orchestrator | 2026-02-19 03:18:39.702077 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-19 03:18:39.702087 | orchestrator | Thursday 19 February 2026 03:18:34 +0000 (0:00:00.425) 0:04:27.225 ***** 2026-02-19 03:18:39.702094 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:39.702102 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:39.702110 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:39.702118 | orchestrator | 2026-02-19 03:18:39.702126 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-19 03:18:39.702134 | orchestrator | Thursday 19 February 2026 03:18:35 +0000 (0:00:01.598) 0:04:28.824 ***** 2026-02-19 03:18:39.702142 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:18:39.702150 | orchestrator | 2026-02-19 03:18:39.702158 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-19 03:18:39.702167 | orchestrator | Thursday 19 February 2026 03:18:37 +0000 (0:00:01.642) 0:04:30.466 ***** 2026-02-19 03:18:39.702175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-19 03:18:39.702205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 03:18:39.702221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:39.702227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:39.702247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-19 03:18:39.702253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 03:18:39.702258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 03:18:39.702263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:39.702273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:39.702278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 03:18:39.702286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-19 03:18:39.702291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 03:18:39.702300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:41.259404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:41.259531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 03:18:41.259584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-19 03:18:41.259626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-19 03:18:41.259645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-19 03:18:41.259686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:41.259704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-19 03:18:41.259731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:41.259755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:41.259772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 03:18:41.259789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:41.259806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 03:18:41.259917 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-19 03:18:41.965636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-19 03:18:41.965715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:41.965735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:41.965741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 03:18:41.965746 | orchestrator | 2026-02-19 03:18:41.965753 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-19 03:18:41.965759 | orchestrator | Thursday 19 February 2026 03:18:41 +0000 (0:00:04.096) 0:04:34.563 ***** 2026-02-19 03:18:41.965765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-19 03:18:41.965771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 03:18:41.965799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:41.965804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:41.965834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 03:18:41.965847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}}}})  2026-02-19 03:18:41.965853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-19 03:18:41.965858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:41.965873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:42.080030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 03:18:42.080122 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:42.080154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-19 03:18:42.080165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 03:18:42.080174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:42.080183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:42.080193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-19 
03:18:42.080239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 03:18:42.080250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 03:18:42.080267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-19 03:18:42.080274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:42.080280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-19 03:18:42.080293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:42.080301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:43.954067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 03:18:43.954236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:43.954266 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-19 03:18:43.954277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 03:18:43.954285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-19 03:18:43.954310 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:43.954320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:43.954344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 03:18:43.954352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 03:18:43.954359 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:43.954366 | orchestrator | 2026-02-19 03:18:43.954375 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-19 03:18:43.954383 | orchestrator | Thursday 19 February 2026 03:18:42 +0000 (0:00:00.823) 0:04:35.386 ***** 2026-02-19 03:18:43.954394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-19 03:18:43.954405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-19 03:18:43.954414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-19 03:18:43.954424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-19 03:18:43.954433 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:43.954440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2026-02-19 03:18:43.954455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-19 03:18:43.954462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-19 03:18:43.954469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-19 03:18:43.954476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-19 03:18:43.954482 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:43.954489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-19 03:18:43.954496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-19 03:18:43.954507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-19 03:18:51.837125 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:51.837239 | orchestrator | 2026-02-19 03:18:51.837257 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-19 03:18:51.837270 | orchestrator | Thursday 19 February 2026 03:18:43 +0000 (0:00:01.722) 0:04:37.109 ***** 2026-02-19 03:18:51.837282 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:51.837293 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:51.837304 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:51.837315 | orchestrator | 2026-02-19 03:18:51.837326 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-19 03:18:51.837337 | orchestrator | Thursday 19 February 2026 03:18:44 +0000 (0:00:00.439) 0:04:37.549 ***** 2026-02-19 03:18:51.837348 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:51.837359 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:51.837370 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:51.837381 | orchestrator | 2026-02-19 03:18:51.837392 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-19 03:18:51.837403 | orchestrator | Thursday 19 February 2026 03:18:45 +0000 (0:00:01.540) 0:04:39.089 ***** 2026-02-19 03:18:51.837414 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:18:51.837424 | orchestrator | 2026-02-19 03:18:51.837435 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-19 03:18:51.837446 | orchestrator | Thursday 19 February 2026 03:18:47 +0000 (0:00:01.898) 0:04:40.987 ***** 
2026-02-19 03:18:51.837462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 03:18:51.837505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 03:18:51.837582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 03:18:51.837608 | orchestrator | 2026-02-19 03:18:51.837620 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-19 03:18:51.837655 | orchestrator | Thursday 19 February 2026 03:18:49 +0000 (0:00:02.169) 0:04:43.157 ***** 2026-02-19 03:18:51.837670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 03:18:51.837698 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:51.837712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 03:18:51.837725 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:51.837738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 03:18:51.837752 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:51.837764 | orchestrator | 2026-02-19 03:18:51.837777 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-19 03:18:51.837789 | orchestrator | Thursday 19 February 2026 03:18:50 +0000 (0:00:00.416) 0:04:43.574 ***** 2026-02-19 03:18:51.837802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-19 03:18:51.837835 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:18:51.837849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-19 03:18:51.837861 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:18:51.837875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-19 03:18:51.837887 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:18:51.837899 | orchestrator | 2026-02-19 03:18:51.837912 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-19 03:18:51.837925 | orchestrator | Thursday 19 
February 2026 03:18:51 +0000 (0:00:00.684) 0:04:44.258 ***** 2026-02-19 03:18:51.837944 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:02.319243 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:02.319357 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:02.319374 | orchestrator | 2026-02-19 03:19:02.319388 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-19 03:19:02.319401 | orchestrator | Thursday 19 February 2026 03:18:52 +0000 (0:00:00.989) 0:04:45.248 ***** 2026-02-19 03:19:02.319412 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:02.319450 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:02.319462 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:02.319473 | orchestrator | 2026-02-19 03:19:02.319484 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-19 03:19:02.319494 | orchestrator | Thursday 19 February 2026 03:18:53 +0000 (0:00:01.297) 0:04:46.545 ***** 2026-02-19 03:19:02.319505 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:19:02.319517 | orchestrator | 2026-02-19 03:19:02.319528 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-19 03:19:02.319539 | orchestrator | Thursday 19 February 2026 03:18:54 +0000 (0:00:01.599) 0:04:48.145 ***** 2026-02-19 03:19:02.319569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 03:19:02.319588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 03:19:02.319600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 03:19:02.319631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 03:19:02.319659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 03:19:02.319671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 03:19:02.319683 | orchestrator | 2026-02-19 03:19:02.319694 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-19 03:19:02.319706 | orchestrator | Thursday 19 February 2026 03:19:01 +0000 (0:00:06.301) 0:04:54.446 ***** 2026-02-19 03:19:02.319717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-19 03:19:02.319738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-19 03:19:08.155668 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:08.155926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-19 03:19:08.155975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-19 03:19:08.156001 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:08.156024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-19 03:19:08.156043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-19 03:19:08.156077 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:08.156090 | orchestrator | 2026-02-19 03:19:08.156103 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-19 
03:19:08.156115 | orchestrator | Thursday 19 February 2026 03:19:02 +0000 (0:00:01.037) 0:04:55.484 ***** 2026-02-19 03:19:08.156147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-19 03:19:08.156166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-19 03:19:08.156192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-19 03:19:08.156231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-19 03:19:08.156250 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:08.156268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-19 03:19:08.156286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-19 03:19:08.156305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-19 03:19:08.156324 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-19 03:19:08.156344 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:08.156364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-19 03:19:08.156383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-19 03:19:08.156402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-19 03:19:08.156421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-19 03:19:08.156441 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:08.156460 | orchestrator | 2026-02-19 03:19:08.156494 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-19 03:19:08.156512 | orchestrator | Thursday 19 February 2026 03:19:03 +0000 (0:00:00.950) 0:04:56.435 ***** 2026-02-19 03:19:08.156524 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:19:08.156535 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:19:08.156546 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:19:08.156557 | orchestrator | 2026-02-19 03:19:08.156568 
| orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-19 03:19:08.156579 | orchestrator | Thursday 19 February 2026 03:19:04 +0000 (0:00:01.327) 0:04:57.762 ***** 2026-02-19 03:19:08.156590 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:19:08.156604 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:19:08.156622 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:19:08.156640 | orchestrator | 2026-02-19 03:19:08.156658 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-19 03:19:08.156678 | orchestrator | Thursday 19 February 2026 03:19:06 +0000 (0:00:02.217) 0:04:59.980 ***** 2026-02-19 03:19:08.156697 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:08.156717 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:08.156729 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:08.156740 | orchestrator | 2026-02-19 03:19:08.156751 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-19 03:19:08.156761 | orchestrator | Thursday 19 February 2026 03:19:07 +0000 (0:00:00.663) 0:05:00.643 ***** 2026-02-19 03:19:08.156772 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:08.156783 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:08.156794 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:08.156811 | orchestrator | 2026-02-19 03:19:08.156872 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-19 03:19:08.156893 | orchestrator | Thursday 19 February 2026 03:19:07 +0000 (0:00:00.316) 0:05:00.960 ***** 2026-02-19 03:19:08.156911 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:08.156944 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:50.381414 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:50.381501 | orchestrator | 2026-02-19 03:19:50.381509 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-02-19 03:19:50.381515 | orchestrator | Thursday 19 February 2026 03:19:08 +0000 (0:00:00.363) 0:05:01.323 ***** 2026-02-19 03:19:50.381520 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:50.381524 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:50.381529 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:50.381533 | orchestrator | 2026-02-19 03:19:50.381538 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-19 03:19:50.381542 | orchestrator | Thursday 19 February 2026 03:19:08 +0000 (0:00:00.365) 0:05:01.689 ***** 2026-02-19 03:19:50.381547 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:50.381551 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:50.381556 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:50.381560 | orchestrator | 2026-02-19 03:19:50.381564 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-19 03:19:50.381580 | orchestrator | Thursday 19 February 2026 03:19:09 +0000 (0:00:00.649) 0:05:02.338 ***** 2026-02-19 03:19:50.381585 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:50.381589 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:50.381594 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:50.381598 | orchestrator | 2026-02-19 03:19:50.381602 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-19 03:19:50.381607 | orchestrator | Thursday 19 February 2026 03:19:09 +0000 (0:00:00.541) 0:05:02.880 ***** 2026-02-19 03:19:50.381611 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:19:50.381625 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:19:50.381630 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:19:50.381634 | orchestrator | 2026-02-19 03:19:50.381638 | orchestrator | 
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-19 03:19:50.381657 | orchestrator | Thursday 19 February 2026 03:19:10 +0000 (0:00:00.676) 0:05:03.557 ***** 2026-02-19 03:19:50.381668 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:19:50.381672 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:19:50.381676 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:19:50.381681 | orchestrator | 2026-02-19 03:19:50.381685 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-19 03:19:50.381689 | orchestrator | Thursday 19 February 2026 03:19:11 +0000 (0:00:00.670) 0:05:04.227 ***** 2026-02-19 03:19:50.381693 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:19:50.381698 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:19:50.381702 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:19:50.381706 | orchestrator | 2026-02-19 03:19:50.381710 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-19 03:19:50.381714 | orchestrator | Thursday 19 February 2026 03:19:12 +0000 (0:00:01.062) 0:05:05.289 ***** 2026-02-19 03:19:50.381719 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:19:50.381723 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:19:50.381727 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:19:50.381731 | orchestrator | 2026-02-19 03:19:50.381736 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-19 03:19:50.381740 | orchestrator | Thursday 19 February 2026 03:19:13 +0000 (0:00:00.916) 0:05:06.206 ***** 2026-02-19 03:19:50.381744 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:19:50.381748 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:19:50.381753 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:19:50.381757 | orchestrator | 2026-02-19 03:19:50.381761 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] 
**************** 2026-02-19 03:19:50.381765 | orchestrator | Thursday 19 February 2026 03:19:14 +0000 (0:00:01.000) 0:05:07.207 ***** 2026-02-19 03:19:50.381770 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:19:50.381774 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:19:50.381778 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:19:50.381783 | orchestrator | 2026-02-19 03:19:50.381787 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-19 03:19:50.381791 | orchestrator | Thursday 19 February 2026 03:19:18 +0000 (0:00:04.569) 0:05:11.776 ***** 2026-02-19 03:19:50.381795 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:19:50.381800 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:19:50.381804 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:19:50.381856 | orchestrator | 2026-02-19 03:19:50.381865 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-19 03:19:50.381872 | orchestrator | Thursday 19 February 2026 03:19:21 +0000 (0:00:03.135) 0:05:14.911 ***** 2026-02-19 03:19:50.381880 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:19:50.381884 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:19:50.381888 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:19:50.381893 | orchestrator | 2026-02-19 03:19:50.381897 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-19 03:19:50.381902 | orchestrator | Thursday 19 February 2026 03:19:36 +0000 (0:00:14.683) 0:05:29.595 ***** 2026-02-19 03:19:50.381906 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:19:50.381910 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:19:50.381915 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:19:50.381919 | orchestrator | 2026-02-19 03:19:50.381923 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-19 03:19:50.381927 | 
orchestrator | Thursday 19 February 2026 03:19:37 +0000 (0:00:00.771) 0:05:30.367 ***** 2026-02-19 03:19:50.381932 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:19:50.381936 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:19:50.381940 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:19:50.381944 | orchestrator | 2026-02-19 03:19:50.381949 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-19 03:19:50.381954 | orchestrator | Thursday 19 February 2026 03:19:41 +0000 (0:00:04.129) 0:05:34.497 ***** 2026-02-19 03:19:50.381967 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:50.381972 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:50.381977 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:50.381981 | orchestrator | 2026-02-19 03:19:50.381986 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-19 03:19:50.381991 | orchestrator | Thursday 19 February 2026 03:19:41 +0000 (0:00:00.633) 0:05:35.131 ***** 2026-02-19 03:19:50.381996 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:50.382001 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:50.382006 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:50.382010 | orchestrator | 2026-02-19 03:19:50.382065 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-19 03:19:50.382071 | orchestrator | Thursday 19 February 2026 03:19:42 +0000 (0:00:00.347) 0:05:35.479 ***** 2026-02-19 03:19:50.382076 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:50.382081 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:50.382086 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:50.382093 | orchestrator | 2026-02-19 03:19:50.382100 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-19 03:19:50.382107 | 
orchestrator | Thursday 19 February 2026 03:19:42 +0000 (0:00:00.333) 0:05:35.812 ***** 2026-02-19 03:19:50.382114 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:50.382121 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:50.382127 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:50.382135 | orchestrator | 2026-02-19 03:19:50.382142 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-19 03:19:50.382149 | orchestrator | Thursday 19 February 2026 03:19:42 +0000 (0:00:00.330) 0:05:36.143 ***** 2026-02-19 03:19:50.382155 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:50.382167 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:50.382172 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:50.382176 | orchestrator | 2026-02-19 03:19:50.382181 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-19 03:19:50.382185 | orchestrator | Thursday 19 February 2026 03:19:43 +0000 (0:00:00.632) 0:05:36.775 ***** 2026-02-19 03:19:50.382189 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:19:50.382196 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:19:50.382203 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:19:50.382210 | orchestrator | 2026-02-19 03:19:50.382216 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-19 03:19:50.382223 | orchestrator | Thursday 19 February 2026 03:19:43 +0000 (0:00:00.330) 0:05:37.106 ***** 2026-02-19 03:19:50.382227 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:19:50.382231 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:19:50.382236 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:19:50.382240 | orchestrator | 2026-02-19 03:19:50.382244 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-19 03:19:50.382248 | orchestrator | 
Thursday 19 February 2026 03:19:48 +0000 (0:00:04.794) 0:05:41.900 ***** 2026-02-19 03:19:50.382253 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:19:50.382257 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:19:50.382261 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:19:50.382265 | orchestrator | 2026-02-19 03:19:50.382270 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:19:50.382275 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-19 03:19:50.382281 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-19 03:19:50.382285 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-19 03:19:50.382289 | orchestrator | 2026-02-19 03:19:50.382294 | orchestrator | 2026-02-19 03:19:50.382313 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:19:50.382318 | orchestrator | Thursday 19 February 2026 03:19:49 +0000 (0:00:00.864) 0:05:42.764 ***** 2026-02-19 03:19:50.382323 | orchestrator | =============================================================================== 2026-02-19 03:19:50.382327 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.68s 2026-02-19 03:19:50.382331 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.30s 2026-02-19 03:19:50.382336 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.25s 2026-02-19 03:19:50.382340 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.79s 2026-02-19 03:19:50.382344 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.57s 2026-02-19 03:19:50.382348 | orchestrator | loadbalancer : Start backup 
keepalived container ------------------------ 4.13s 2026-02-19 03:19:50.382353 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.10s 2026-02-19 03:19:50.382357 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.06s 2026-02-19 03:19:50.382361 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.94s 2026-02-19 03:19:50.382365 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.87s 2026-02-19 03:19:50.382370 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.59s 2026-02-19 03:19:50.382374 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.52s 2026-02-19 03:19:50.382378 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.49s 2026-02-19 03:19:50.382382 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.46s 2026-02-19 03:19:50.382387 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.41s 2026-02-19 03:19:50.382391 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.39s 2026-02-19 03:19:50.382395 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.35s 2026-02-19 03:19:50.382399 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.33s 2026-02-19 03:19:50.382404 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.29s 2026-02-19 03:19:50.382408 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.28s 2026-02-19 03:19:52.706656 | orchestrator | 2026-02-19 03:19:52 | INFO  | Task 11847af2-8392-4d8a-a5ce-48ce5e471c27 (opensearch) was prepared for execution. 
2026-02-19 03:19:52.706793 | orchestrator | 2026-02-19 03:19:52 | INFO  | It takes a moment until task 11847af2-8392-4d8a-a5ce-48ce5e471c27 (opensearch) has been started and output is visible here. 2026-02-19 03:20:03.421972 | orchestrator | 2026-02-19 03:20:03.422172 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 03:20:03.422192 | orchestrator | 2026-02-19 03:20:03.422201 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 03:20:03.422209 | orchestrator | Thursday 19 February 2026 03:19:56 +0000 (0:00:00.254) 0:00:00.254 ***** 2026-02-19 03:20:03.422218 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:20:03.422227 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:20:03.422235 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:20:03.422245 | orchestrator | 2026-02-19 03:20:03.422300 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 03:20:03.422310 | orchestrator | Thursday 19 February 2026 03:19:57 +0000 (0:00:00.284) 0:00:00.538 ***** 2026-02-19 03:20:03.422334 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-19 03:20:03.422343 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-19 03:20:03.422351 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-19 03:20:03.422360 | orchestrator | 2026-02-19 03:20:03.422368 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-19 03:20:03.422396 | orchestrator | 2026-02-19 03:20:03.422405 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-19 03:20:03.422413 | orchestrator | Thursday 19 February 2026 03:19:57 +0000 (0:00:00.431) 0:00:00.970 ***** 2026-02-19 03:20:03.422421 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-19 03:20:03.422429 | orchestrator | 2026-02-19 03:20:03.422437 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-19 03:20:03.422447 | orchestrator | Thursday 19 February 2026 03:19:58 +0000 (0:00:00.476) 0:00:01.447 ***** 2026-02-19 03:20:03.422457 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-19 03:20:03.422466 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-19 03:20:03.422476 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-19 03:20:03.422486 | orchestrator | 2026-02-19 03:20:03.422495 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-19 03:20:03.422504 | orchestrator | Thursday 19 February 2026 03:19:58 +0000 (0:00:00.637) 0:00:02.084 ***** 2026-02-19 03:20:03.422516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:20:03.422530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:20:03.422556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:20:03.422574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:20:03.422593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:20:03.422605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:20:03.422615 | orchestrator | 2026-02-19 03:20:03.422624 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-19 03:20:03.422634 | orchestrator | Thursday 19 February 2026 03:20:00 +0000 (0:00:01.784) 0:00:03.869 ***** 2026-02-19 03:20:03.422643 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:20:03.422652 | orchestrator | 2026-02-19 03:20:03.422662 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-19 03:20:03.422671 | orchestrator | Thursday 19 February 2026 03:20:01 +0000 (0:00:00.550) 0:00:04.420 ***** 2026-02-19 03:20:03.422691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:20:04.265623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:20:04.265708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:20:04.265720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:20:04.265730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:20:04.265801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:20:04.265923 | orchestrator | 2026-02-19 03:20:04.265937 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-19 03:20:04.265950 | orchestrator | Thursday 19 February 2026 03:20:03 +0000 (0:00:02.398) 0:00:06.818 ***** 
2026-02-19 03:20:04.265964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-19 03:20:04.265972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-02-19 03:20:04.265980 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:20:04.265989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-19 03:20:04.266061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-19 03:20:05.271418 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:20:05.271552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-19 03:20:05.271587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-19 03:20:05.271612 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:20:05.271633 | orchestrator | 2026-02-19 03:20:05.271655 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-19 03:20:05.271676 | orchestrator | Thursday 19 February 2026 03:20:04 +0000 (0:00:00.848) 0:00:07.666 ***** 2026-02-19 03:20:05.271729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-19 03:20:05.271774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-19 03:20:05.271853 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:20:05.271876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-19 03:20:05.271896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-19 03:20:05.271917 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:20:05.271949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-19 03:20:05.271979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-19 03:20:05.271998 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:20:05.272015 | orchestrator | 2026-02-19 03:20:05.272035 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-19 03:20:05.272069 | orchestrator | Thursday 19 February 2026 03:20:05 +0000 (0:00:00.991) 0:00:08.658 ***** 2026-02-19 03:20:13.380742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:20:13.380917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:20:13.380940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:20:13.380993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:20:13.381032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:20:13.381044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:20:13.381059 | orchestrator | 2026-02-19 03:20:13.381069 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-19 03:20:13.381078 | orchestrator | Thursday 19 February 2026 03:20:07 +0000 (0:00:02.397) 0:00:11.056 ***** 2026-02-19 03:20:13.381086 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:20:13.381094 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:20:13.381101 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:20:13.381109 | orchestrator | 2026-02-19 03:20:13.381116 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-19 03:20:13.381123 | orchestrator | Thursday 19 February 2026 03:20:09 +0000 (0:00:02.215) 0:00:13.271 ***** 2026-02-19 03:20:13.381130 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:20:13.381137 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:20:13.381144 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:20:13.381151 | 
orchestrator | 2026-02-19 03:20:13.381159 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-19 03:20:13.381166 | orchestrator | Thursday 19 February 2026 03:20:11 +0000 (0:00:01.770) 0:00:15.042 ***** 2026-02-19 03:20:13.381174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:20:13.381186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-02-19 03:20:13.381200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-19 03:22:50.814769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-02-19 03:22:50.814983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:22:50.815019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-19 03:22:50.815033 | orchestrator | 2026-02-19 03:22:50.815046 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-19 03:22:50.815058 | orchestrator | Thursday 19 February 2026 03:20:13 +0000 (0:00:01.738) 0:00:16.780 ***** 2026-02-19 03:22:50.815069 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:22:50.815099 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:22:50.815110 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:22:50.815121 | orchestrator | 2026-02-19 03:22:50.815133 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-19 03:22:50.815144 | orchestrator | Thursday 19 February 2026 03:20:13 +0000 (0:00:00.296) 0:00:17.076 ***** 2026-02-19 03:22:50.815155 | orchestrator | 2026-02-19 03:22:50.815166 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-19 03:22:50.815176 | orchestrator | Thursday 19 February 2026 03:20:13 +0000 (0:00:00.060) 0:00:17.137 ***** 2026-02-19 03:22:50.815187 | orchestrator | 2026-02-19 03:22:50.815197 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-19 03:22:50.815217 | orchestrator | Thursday 19 February 2026 03:20:13 +0000 (0:00:00.065) 0:00:17.202 ***** 2026-02-19 03:22:50.815228 | orchestrator | 2026-02-19 03:22:50.815238 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-19 03:22:50.815269 | orchestrator | Thursday 19 February 2026 03:20:13 +0000 (0:00:00.064) 0:00:17.266 ***** 2026-02-19 03:22:50.815282 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:22:50.815295 | orchestrator | 
2026-02-19 03:22:50.815307 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-19 03:22:50.815319 | orchestrator | Thursday 19 February 2026 03:20:14 +0000 (0:00:00.201) 0:00:17.468 ***** 2026-02-19 03:22:50.815331 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:22:50.815344 | orchestrator | 2026-02-19 03:22:50.815357 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-19 03:22:50.815369 | orchestrator | Thursday 19 February 2026 03:20:14 +0000 (0:00:00.633) 0:00:18.102 ***** 2026-02-19 03:22:50.815382 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:22:50.815394 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:22:50.815406 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:22:50.815418 | orchestrator | 2026-02-19 03:22:50.815430 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-19 03:22:50.815443 | orchestrator | Thursday 19 February 2026 03:21:20 +0000 (0:01:05.666) 0:01:23.768 ***** 2026-02-19 03:22:50.815455 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:22:50.815467 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:22:50.815480 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:22:50.815492 | orchestrator | 2026-02-19 03:22:50.815504 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-19 03:22:50.815517 | orchestrator | Thursday 19 February 2026 03:22:39 +0000 (0:01:18.976) 0:02:42.745 ***** 2026-02-19 03:22:50.815529 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:22:50.815542 | orchestrator | 2026-02-19 03:22:50.815554 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-19 03:22:50.815566 | orchestrator | Thursday 19 February 2026 03:22:39 +0000 
(0:00:00.494) 0:02:43.240 ***** 2026-02-19 03:22:50.815579 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:22:50.815592 | orchestrator | 2026-02-19 03:22:50.815604 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-19 03:22:50.815616 | orchestrator | Thursday 19 February 2026 03:22:42 +0000 (0:00:02.986) 0:02:46.227 ***** 2026-02-19 03:22:50.815628 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:22:50.815641 | orchestrator | 2026-02-19 03:22:50.815651 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-19 03:22:50.815662 | orchestrator | Thursday 19 February 2026 03:22:45 +0000 (0:00:02.355) 0:02:48.582 ***** 2026-02-19 03:22:50.815673 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:22:50.815683 | orchestrator | 2026-02-19 03:22:50.815694 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-19 03:22:50.815705 | orchestrator | Thursday 19 February 2026 03:22:47 +0000 (0:00:02.809) 0:02:51.391 ***** 2026-02-19 03:22:50.815715 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:22:50.815726 | orchestrator | 2026-02-19 03:22:50.815737 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:22:50.815748 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-19 03:22:50.815760 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-19 03:22:50.815778 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-19 03:22:50.815789 | orchestrator | 2026-02-19 03:22:50.815800 | orchestrator | 2026-02-19 03:22:50.815839 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:22:50.815851 | orchestrator | Thursday 19 
February 2026 03:22:50 +0000 (0:00:02.797) 0:02:54.189 ***** 2026-02-19 03:22:50.815861 | orchestrator | =============================================================================== 2026-02-19 03:22:50.815872 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 78.98s 2026-02-19 03:22:50.815883 | orchestrator | opensearch : Restart opensearch container ------------------------------ 65.67s 2026-02-19 03:22:50.815893 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.99s 2026-02-19 03:22:50.815904 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.81s 2026-02-19 03:22:50.815915 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.80s 2026-02-19 03:22:50.815925 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.40s 2026-02-19 03:22:50.815936 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.40s 2026-02-19 03:22:50.815946 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.36s 2026-02-19 03:22:50.815957 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.22s 2026-02-19 03:22:50.815968 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.78s 2026-02-19 03:22:50.815978 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.77s 2026-02-19 03:22:50.815989 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.74s 2026-02-19 03:22:50.815999 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.99s 2026-02-19 03:22:50.816010 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.85s 2026-02-19 03:22:50.816021 | orchestrator | opensearch : Setting 
sysctl values -------------------------------------- 0.64s 2026-02-19 03:22:50.816032 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.63s 2026-02-19 03:22:50.816049 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-02-19 03:22:51.152347 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-02-19 03:22:51.152435 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2026-02-19 03:22:51.152444 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-02-19 03:22:53.515259 | orchestrator | 2026-02-19 03:22:53 | INFO  | Task 8da8178d-8849-41ad-aa9a-d54112e3638b (memcached) was prepared for execution. 2026-02-19 03:22:53.515329 | orchestrator | 2026-02-19 03:22:53 | INFO  | It takes a moment until task 8da8178d-8849-41ad-aa9a-d54112e3638b (memcached) has been started and output is visible here. 
2026-02-19 03:23:05.066610 | orchestrator | 2026-02-19 03:23:05.066721 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 03:23:05.066739 | orchestrator | 2026-02-19 03:23:05.066752 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 03:23:05.066765 | orchestrator | Thursday 19 February 2026 03:22:57 +0000 (0:00:00.188) 0:00:00.188 ***** 2026-02-19 03:23:05.066777 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:23:05.066790 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:23:05.066801 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:23:05.066909 | orchestrator | 2026-02-19 03:23:05.066922 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 03:23:05.066936 | orchestrator | Thursday 19 February 2026 03:22:57 +0000 (0:00:00.241) 0:00:00.430 ***** 2026-02-19 03:23:05.066949 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-19 03:23:05.066963 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-19 03:23:05.066975 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-19 03:23:05.066988 | orchestrator | 2026-02-19 03:23:05.067001 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-19 03:23:05.067044 | orchestrator | 2026-02-19 03:23:05.067056 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-19 03:23:05.067068 | orchestrator | Thursday 19 February 2026 03:22:58 +0000 (0:00:00.351) 0:00:00.782 ***** 2026-02-19 03:23:05.067081 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:23:05.067094 | orchestrator | 2026-02-19 03:23:05.067106 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-02-19 03:23:05.067118 | orchestrator | Thursday 19 February 2026 03:22:58 +0000 (0:00:00.439) 0:00:01.221 ***** 2026-02-19 03:23:05.067130 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-19 03:23:05.067142 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-19 03:23:05.067154 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-19 03:23:05.067166 | orchestrator | 2026-02-19 03:23:05.067179 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-19 03:23:05.067191 | orchestrator | Thursday 19 February 2026 03:22:59 +0000 (0:00:00.630) 0:00:01.852 ***** 2026-02-19 03:23:05.067204 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-19 03:23:05.067217 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-19 03:23:05.067229 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-19 03:23:05.067242 | orchestrator | 2026-02-19 03:23:05.067252 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-02-19 03:23:05.067260 | orchestrator | Thursday 19 February 2026 03:23:00 +0000 (0:00:01.696) 0:00:03.549 ***** 2026-02-19 03:23:05.067282 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:23:05.067290 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:23:05.067298 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:23:05.067306 | orchestrator | 2026-02-19 03:23:05.067314 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-19 03:23:05.067322 | orchestrator | Thursday 19 February 2026 03:23:02 +0000 (0:00:01.538) 0:00:05.087 ***** 2026-02-19 03:23:05.067329 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:23:05.067337 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:23:05.067345 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:23:05.067353 | orchestrator | 2026-02-19 
03:23:05.067361 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:23:05.067369 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:23:05.067378 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:23:05.067386 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:23:05.067394 | orchestrator | 2026-02-19 03:23:05.067402 | orchestrator | 2026-02-19 03:23:05.067410 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:23:05.067418 | orchestrator | Thursday 19 February 2026 03:23:04 +0000 (0:00:02.208) 0:00:07.296 ***** 2026-02-19 03:23:05.067426 | orchestrator | =============================================================================== 2026-02-19 03:23:05.067434 | orchestrator | memcached : Restart memcached container --------------------------------- 2.21s 2026-02-19 03:23:05.067441 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.70s 2026-02-19 03:23:05.067449 | orchestrator | memcached : Check memcached container ----------------------------------- 1.54s 2026-02-19 03:23:05.067457 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.63s 2026-02-19 03:23:05.067465 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.44s 2026-02-19 03:23:05.067473 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2026-02-19 03:23:05.067481 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.24s 2026-02-19 03:23:07.521357 | orchestrator | 2026-02-19 03:23:07 | INFO  | Task dfa69b7c-3a8e-468a-b6a7-7ea5991113db (redis) was prepared for execution. 
2026-02-19 03:23:07.521464 | orchestrator | 2026-02-19 03:23:07 | INFO  | It takes a moment until task dfa69b7c-3a8e-468a-b6a7-7ea5991113db (redis) has been started and output is visible here. 2026-02-19 03:23:16.554763 | orchestrator | 2026-02-19 03:23:16.554946 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 03:23:16.554968 | orchestrator | 2026-02-19 03:23:16.554980 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 03:23:16.554992 | orchestrator | Thursday 19 February 2026 03:23:11 +0000 (0:00:00.254) 0:00:00.254 ***** 2026-02-19 03:23:16.555003 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:23:16.555015 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:23:16.555026 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:23:16.555037 | orchestrator | 2026-02-19 03:23:16.555048 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 03:23:16.555059 | orchestrator | Thursday 19 February 2026 03:23:12 +0000 (0:00:00.319) 0:00:00.574 ***** 2026-02-19 03:23:16.555070 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-19 03:23:16.555082 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-19 03:23:16.555094 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-19 03:23:16.555117 | orchestrator | 2026-02-19 03:23:16.555144 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-19 03:23:16.555162 | orchestrator | 2026-02-19 03:23:16.555179 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-19 03:23:16.555197 | orchestrator | Thursday 19 February 2026 03:23:12 +0000 (0:00:00.419) 0:00:00.994 ***** 2026-02-19 03:23:16.555215 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-19 03:23:16.555232 | orchestrator | 2026-02-19 03:23:16.555250 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-19 03:23:16.555269 | orchestrator | Thursday 19 February 2026 03:23:12 +0000 (0:00:00.470) 0:00:01.464 ***** 2026-02-19 03:23:16.555294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 03:23:16.555322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 03:23:16.555341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 03:23:16.555381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 03:23:16.555417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 03:23:16.555432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 03:23:16.555443 | orchestrator | 2026-02-19 03:23:16.555455 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-19 03:23:16.555466 | orchestrator | Thursday 19 February 2026 03:23:14 +0000 (0:00:01.107) 0:00:02.572 ***** 2026-02-19 03:23:16.555477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 03:23:16.555588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 03:23:16.555611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 03:23:16.555632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 03:23:16.555653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806171 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806279 | orchestrator | 2026-02-19 03:23:20.806297 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-19 03:23:20.806310 | orchestrator | Thursday 19 February 2026 03:23:16 +0000 (0:00:02.465) 0:00:05.038 ***** 2026-02-19 03:23:20.806323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806507 | orchestrator | 2026-02-19 03:23:20.806527 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-19 03:23:20.806547 | orchestrator | Thursday 19 February 2026 03:23:18 +0000 (0:00:02.452) 0:00:07.490 ***** 2026-02-19 03:23:20.806561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 03:23:20.806646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 03:23:35.925433 | orchestrator | 2026-02-19 03:23:35.925524 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-19 03:23:35.925536 | orchestrator | Thursday 19 February 2026 03:23:20 +0000 (0:00:01.609) 0:00:09.099 ***** 2026-02-19 03:23:35.925543 | orchestrator | 2026-02-19 03:23:35.925550 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-19 03:23:35.925557 | orchestrator | Thursday 19 February 2026 03:23:20 +0000 (0:00:00.064) 0:00:09.164 ***** 2026-02-19 03:23:35.925564 | orchestrator | 2026-02-19 03:23:35.925571 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-02-19 03:23:35.925578 | orchestrator | Thursday 19 February 2026 03:23:20 +0000 (0:00:00.062) 0:00:09.226 ***** 2026-02-19 03:23:35.925584 | orchestrator | 2026-02-19 03:23:35.925591 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-19 03:23:35.925598 | orchestrator | Thursday 19 February 2026 03:23:20 +0000 (0:00:00.062) 0:00:09.289 ***** 2026-02-19 03:23:35.925604 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:23:35.925612 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:23:35.925619 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:23:35.925625 | orchestrator | 2026-02-19 03:23:35.925632 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-19 03:23:35.925639 | orchestrator | Thursday 19 February 2026 03:23:28 +0000 (0:00:07.764) 0:00:17.053 ***** 2026-02-19 03:23:35.925664 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:23:35.925671 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:23:35.925678 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:23:35.925685 | orchestrator | 2026-02-19 03:23:35.925692 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:23:35.925698 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:23:35.925707 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:23:35.925726 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:23:35.925733 | orchestrator | 2026-02-19 03:23:35.925740 | orchestrator | 2026-02-19 03:23:35.925746 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:23:35.925753 | orchestrator | Thursday 19 February 
2026 03:23:35 +0000 (0:00:07.027) 0:00:24.080 ***** 2026-02-19 03:23:35.925759 | orchestrator | =============================================================================== 2026-02-19 03:23:35.925766 | orchestrator | redis : Restart redis container ----------------------------------------- 7.76s 2026-02-19 03:23:35.925773 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.03s 2026-02-19 03:23:35.925779 | orchestrator | redis : Copying over default config.json files -------------------------- 2.47s 2026-02-19 03:23:35.925786 | orchestrator | redis : Copying over redis config files --------------------------------- 2.45s 2026-02-19 03:23:35.925793 | orchestrator | redis : Check redis containers ------------------------------------------ 1.61s 2026-02-19 03:23:35.925799 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.11s 2026-02-19 03:23:35.925806 | orchestrator | redis : include_tasks --------------------------------------------------- 0.47s 2026-02-19 03:23:35.925817 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-02-19 03:23:35.925885 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-02-19 03:23:35.925896 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.19s 2026-02-19 03:23:38.393082 | orchestrator | 2026-02-19 03:23:38 | INFO  | Task a6b0a78d-55c8-411e-aa82-0f57fda2d56f (mariadb) was prepared for execution. 2026-02-19 03:23:38.393178 | orchestrator | 2026-02-19 03:23:38 | INFO  | It takes a moment until task a6b0a78d-55c8-411e-aa82-0f57fda2d56f (mariadb) has been started and output is visible here. 
2026-02-19 03:23:51.959722 | orchestrator | 2026-02-19 03:23:51.959930 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 03:23:51.959948 | orchestrator | 2026-02-19 03:23:51.959957 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 03:23:51.959965 | orchestrator | Thursday 19 February 2026 03:23:42 +0000 (0:00:00.163) 0:00:00.163 ***** 2026-02-19 03:23:51.959974 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:23:51.959983 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:23:51.959991 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:23:51.959999 | orchestrator | 2026-02-19 03:23:51.960007 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 03:23:51.960016 | orchestrator | Thursday 19 February 2026 03:23:42 +0000 (0:00:00.332) 0:00:00.496 ***** 2026-02-19 03:23:51.960024 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-19 03:23:51.960032 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-19 03:23:51.960040 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-19 03:23:51.960048 | orchestrator | 2026-02-19 03:23:51.960056 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-19 03:23:51.960064 | orchestrator | 2026-02-19 03:23:51.960072 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-19 03:23:51.960100 | orchestrator | Thursday 19 February 2026 03:23:43 +0000 (0:00:00.534) 0:00:01.030 ***** 2026-02-19 03:23:51.960108 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 03:23:51.960116 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-19 03:23:51.960124 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-19 03:23:51.960132 | orchestrator | 
2026-02-19 03:23:51.960140 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-19 03:23:51.960148 | orchestrator | Thursday 19 February 2026 03:23:43 +0000 (0:00:00.364) 0:00:01.395 ***** 2026-02-19 03:23:51.960156 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:23:51.960164 | orchestrator | 2026-02-19 03:23:51.960172 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-19 03:23:51.960180 | orchestrator | Thursday 19 February 2026 03:23:44 +0000 (0:00:00.522) 0:00:01.918 ***** 2026-02-19 03:23:51.960207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 03:23:51.960237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 03:23:51.960258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 03:23:51.960268 | orchestrator | 2026-02-19 03:23:51.960278 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-19 03:23:51.960288 | orchestrator | Thursday 19 February 2026 03:23:46 +0000 (0:00:02.574) 0:00:04.492 ***** 2026-02-19 03:23:51.960297 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:23:51.960306 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:23:51.960316 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:23:51.960324 | orchestrator | 2026-02-19 03:23:51.960333 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-19 03:23:51.960342 | orchestrator | Thursday 19 February 2026 03:23:47 +0000 (0:00:00.670) 0:00:05.163 ***** 2026-02-19 03:23:51.960351 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:23:51.960359 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:23:51.960368 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:23:51.960377 | orchestrator | 2026-02-19 03:23:51.960386 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-19 03:23:51.960395 | orchestrator | Thursday 19 February 2026 03:23:49 +0000 (0:00:01.529) 0:00:06.692 ***** 2026-02-19 03:23:51.960416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 03:23:59.221080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 03:23:59.221230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 03:23:59.221268 | orchestrator | 2026-02-19 03:23:59.221283 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-19 03:23:59.221296 | orchestrator | Thursday 19 February 2026 03:23:51 +0000 (0:00:02.943) 0:00:09.636 ***** 2026-02-19 03:23:59.221308 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:23:59.221321 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:23:59.221331 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:23:59.221342 | orchestrator | 2026-02-19 03:23:59.221353 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-19 03:23:59.221382 | orchestrator | Thursday 19 February 2026 03:23:52 +0000 (0:00:01.047) 0:00:10.683 ***** 2026-02-19 03:23:59.221394 | 
orchestrator | changed: [testbed-node-0] 2026-02-19 03:23:59.221404 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:23:59.221415 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:23:59.221426 | orchestrator | 2026-02-19 03:23:59.221437 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-19 03:23:59.221448 | orchestrator | Thursday 19 February 2026 03:23:56 +0000 (0:00:03.554) 0:00:14.238 ***** 2026-02-19 03:23:59.221459 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:23:59.221470 | orchestrator | 2026-02-19 03:23:59.221483 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-19 03:23:59.221496 | orchestrator | Thursday 19 February 2026 03:23:57 +0000 (0:00:00.493) 0:00:14.731 ***** 2026-02-19 03:23:59.221517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:23:59.221545 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:23:59.221586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:24:03.910493 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:24:03.910630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:24:03.910687 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:24:03.910708 | orchestrator | 2026-02-19 03:24:03.910728 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-19 03:24:03.910746 | orchestrator | Thursday 19 February 2026 03:23:59 +0000 (0:00:02.168) 0:00:16.899 ***** 2026-02-19 03:24:03.910766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:24:03.910783 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:24:03.910951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:24:03.910990 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:24:03.911008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:24:03.911025 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:24:03.911042 | orchestrator | 2026-02-19 03:24:03.911060 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-19 03:24:03.911077 | orchestrator | Thursday 19 February 2026 03:24:01 +0000 (0:00:02.527) 0:00:19.427 ***** 2026-02-19 03:24:03.911118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:24:06.540563 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:24:06.540686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:24:06.540713 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:24:06.540753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 03:24:06.540804 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:24:06.540823 | orchestrator | 2026-02-19 03:24:06.540902 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-19 03:24:06.540920 | orchestrator | Thursday 19 February 2026 03:24:03 +0000 (0:00:02.163) 0:00:21.590 ***** 2026-02-19 03:24:06.540964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 03:24:06.540985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 03:24:06.541023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-19 03:26:17.050433 | orchestrator |
2026-02-19 03:26:17.050543 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-02-19 03:26:17.050558 | orchestrator | Thursday 19 February 2026 03:24:06 +0000 (0:00:02.630) 0:00:24.221 *****
2026-02-19 03:26:17.050569 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:26:17.050580 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:26:17.050590 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:26:17.050600 | orchestrator |
2026-02-19 03:26:17.050610 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-02-19 03:26:17.050620 | orchestrator | Thursday 19 February 2026 03:24:07 +0000 (0:00:00.855) 0:00:25.076 *****
2026-02-19 03:26:17.050630 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:26:17.050641 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:26:17.050651 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:26:17.050660 | orchestrator |
2026-02-19 03:26:17.050670 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-02-19 03:26:17.050680 | orchestrator | Thursday 19 February 2026 03:24:07 +0000 (0:00:00.501) 0:00:25.578 *****
2026-02-19 03:26:17.050689 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:26:17.050699 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:26:17.050708 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:26:17.050718 | orchestrator |
2026-02-19 03:26:17.050728 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-19 03:26:17.050737 | orchestrator | Thursday 19 February 2026 03:24:08 +0000 (0:00:00.323) 0:00:25.902 *****
2026-02-19 03:26:17.050748 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-02-19 03:26:17.050759 | orchestrator | ...ignoring
2026-02-19 03:26:17.050769 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-02-19 03:26:17.050779 | orchestrator | ...ignoring
2026-02-19 03:26:17.050788 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-02-19 03:26:17.050798 | orchestrator | ...ignoring
2026-02-19 03:26:17.050831 | orchestrator |
2026-02-19 03:26:17.050841 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-19 03:26:17.050851 | orchestrator | Thursday 19 February 2026 03:24:19 +0000 (0:00:10.833) 0:00:36.736 *****
2026-02-19 03:26:17.050861 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:26:17.050870 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:26:17.050879 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:26:17.050889 | orchestrator |
2026-02-19 03:26:17.050898 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-19 03:26:17.050944 | orchestrator | Thursday 19 February 2026 03:24:19 +0000 (0:00:00.398) 0:00:37.135 *****
2026-02-19 03:26:17.050961 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:26:17.050979 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:26:17.050997 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:26:17.051014 | orchestrator |
2026-02-19 03:26:17.051030 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-19 03:26:17.051042 | orchestrator | Thursday 19 February 2026 03:24:20 +0000 (0:00:00.416) 0:00:37.795 *****
2026-02-19 03:26:17.051053 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:26:17.051065 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:26:17.051076 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:26:17.051087 | orchestrator |
2026-02-19 03:26:17.051112 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-19 03:26:17.051124 | orchestrator | Thursday 19 February 2026 03:24:20 +0000 (0:00:00.421) 0:00:38.211 *****
2026-02-19 03:26:17.051135 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:26:17.051146 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:26:17.051158 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:26:17.051169 | orchestrator |
2026-02-19 03:26:17.051182 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-19 03:26:17.051194 | orchestrator | Thursday 19 February 2026 03:24:20 +0000 (0:00:00.412) 0:00:38.632 *****
2026-02-19 03:26:17.051205 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:26:17.051216 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:26:17.051226 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:26:17.051235 | orchestrator |
2026-02-19 03:26:17.051245 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-19 03:26:17.051255 | orchestrator | Thursday 19 February 2026 03:24:21 +0000 (0:00:00.658) 0:00:39.044 *****
2026-02-19 03:26:17.051264 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:26:17.051274 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:26:17.051283 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:26:17.051293 | orchestrator |
2026-02-19 03:26:17.051302 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-19 03:26:17.051311 | orchestrator | Thursday 19 February 2026 03:24:22 +0000 (0:00:00.373) 0:00:39.702 *****
2026-02-19 03:26:17.051321 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:26:17.051330 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:26:17.051340 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-19 03:26:17.051349 | orchestrator |
2026-02-19 03:26:17.051359 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-19 03:26:17.051368 | orchestrator | Thursday 19 February 2026 03:24:22 +0000 (0:00:00.373) 0:00:40.076 *****
2026-02-19 03:26:17.051378 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:26:17.051387 | orchestrator |
2026-02-19 03:26:17.051397 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-19 03:26:17.051406 | orchestrator | Thursday 19 February 2026 03:24:32 +0000 (0:00:10.117) 0:00:50.194 *****
2026-02-19 03:26:17.051421 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:26:17.051435 | orchestrator |
2026-02-19 03:26:17.051460 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-19 03:26:17.051479 | orchestrator | Thursday 19 February 2026 03:24:32 +0000 (0:00:00.116) 0:00:50.310 *****
2026-02-19 03:26:17.051494 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:26:17.051543 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:26:17.051561 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:26:17.051578 | orchestrator |
2026-02-19 03:26:17.051593 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-19 03:26:17.051608 | orchestrator | Thursday 19 February 2026 03:24:33 +0000 (0:00:00.990) 0:00:51.300 *****
2026-02-19 03:26:17.051623 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:26:17.051640 | orchestrator |
2026-02-19 03:26:17.051656 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-19 03:26:17.051672 | orchestrator | Thursday 19 February 2026 03:24:41 +0000 (0:00:07.630) 0:00:58.931 *****
2026-02-19 03:26:17.051688 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:26:17.051704 | orchestrator |
2026-02-19 03:26:17.051714 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-19 03:26:17.051724 | orchestrator | Thursday 19 February 2026 03:24:42 +0000 (0:00:01.686) 0:01:00.617 *****
2026-02-19 03:26:17.051733 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:26:17.051743 |
orchestrator | 2026-02-19 03:26:17.051752 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-19 03:26:17.051762 | orchestrator | Thursday 19 February 2026 03:24:45 +0000 (0:00:02.520) 0:01:03.137 ***** 2026-02-19 03:26:17.051772 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:26:17.051781 | orchestrator | 2026-02-19 03:26:17.051791 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-19 03:26:17.051800 | orchestrator | Thursday 19 February 2026 03:24:45 +0000 (0:00:00.122) 0:01:03.259 ***** 2026-02-19 03:26:17.051810 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:26:17.051820 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:26:17.051829 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:26:17.051839 | orchestrator | 2026-02-19 03:26:17.051848 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-19 03:26:17.051858 | orchestrator | Thursday 19 February 2026 03:24:45 +0000 (0:00:00.316) 0:01:03.576 ***** 2026-02-19 03:26:17.051867 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:26:17.051877 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-19 03:26:17.051887 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:26:17.051896 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:26:17.051926 | orchestrator | 2026-02-19 03:26:17.051936 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-19 03:26:17.051946 | orchestrator | skipping: no hosts matched 2026-02-19 03:26:17.051955 | orchestrator | 2026-02-19 03:26:17.051965 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-19 03:26:17.051974 | orchestrator | 2026-02-19 03:26:17.051984 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-02-19 03:26:17.051993 | orchestrator | Thursday 19 February 2026 03:24:46 +0000 (0:00:00.544) 0:01:04.121 ***** 2026-02-19 03:26:17.052003 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:26:17.052012 | orchestrator | 2026-02-19 03:26:17.052022 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-19 03:26:17.052031 | orchestrator | Thursday 19 February 2026 03:25:02 +0000 (0:00:16.078) 0:01:20.200 ***** 2026-02-19 03:26:17.052040 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:26:17.052050 | orchestrator | 2026-02-19 03:26:17.052060 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-19 03:26:17.052069 | orchestrator | Thursday 19 February 2026 03:25:18 +0000 (0:00:15.620) 0:01:35.820 ***** 2026-02-19 03:26:17.052079 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:26:17.052088 | orchestrator | 2026-02-19 03:26:17.052102 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-19 03:26:17.052112 | orchestrator | 2026-02-19 03:26:17.052130 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-19 03:26:17.052140 | orchestrator | Thursday 19 February 2026 03:25:20 +0000 (0:00:02.359) 0:01:38.180 ***** 2026-02-19 03:26:17.052158 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:26:17.052168 | orchestrator | 2026-02-19 03:26:17.052177 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-19 03:26:17.052187 | orchestrator | Thursday 19 February 2026 03:25:37 +0000 (0:00:17.075) 0:01:55.255 ***** 2026-02-19 03:26:17.052196 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:26:17.052205 | orchestrator | 2026-02-19 03:26:17.052215 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-19 03:26:17.052224 
| orchestrator | Thursday 19 February 2026 03:25:54 +0000 (0:00:16.649) 0:02:11.905 ***** 2026-02-19 03:26:17.052234 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:26:17.052243 | orchestrator | 2026-02-19 03:26:17.052253 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-19 03:26:17.052262 | orchestrator | 2026-02-19 03:26:17.052272 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-19 03:26:17.052281 | orchestrator | Thursday 19 February 2026 03:25:57 +0000 (0:00:02.803) 0:02:14.708 ***** 2026-02-19 03:26:17.052291 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:26:17.052300 | orchestrator | 2026-02-19 03:26:17.052309 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-19 03:26:17.052319 | orchestrator | Thursday 19 February 2026 03:26:09 +0000 (0:00:12.273) 0:02:26.982 ***** 2026-02-19 03:26:17.052328 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:26:17.052338 | orchestrator | 2026-02-19 03:26:17.052347 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-19 03:26:17.052357 | orchestrator | Thursday 19 February 2026 03:26:13 +0000 (0:00:04.640) 0:02:31.622 ***** 2026-02-19 03:26:17.052366 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:26:17.052375 | orchestrator | 2026-02-19 03:26:17.052385 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-19 03:26:17.052394 | orchestrator | 2026-02-19 03:26:17.052404 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-19 03:26:17.052413 | orchestrator | Thursday 19 February 2026 03:26:16 +0000 (0:00:02.420) 0:02:34.043 ***** 2026-02-19 03:26:17.052423 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:26:17.052432 | orchestrator | 
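A note on the liveness checks in the output above: "Check MariaDB service port liveness" and the later "Wait for ... port liveness" tasks are `wait_for`-style probes that read the server greeting on port 3306 and look for the string `MariaDB`; on a not-yet-bootstrapped cluster the initial 10-second timeouts on all three nodes are expected and explicitly ignored. A minimal sketch of such a probe (the helper name and the local fake server are illustrative, not taken from the role):

```python
import socket
import threading
import time

def wait_for_banner(host, port, search, timeout=10.0):
    # Poll the TCP port until the greeting contains `search`, or give up.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0) as conn:
                conn.settimeout(1.0)
                if search.encode() in conn.recv(1024):
                    return True
        except OSError:
            pass  # nothing listening yet; retry
        time.sleep(0.1)
    return False

# Local stand-in for mysqld: send a MariaDB-style greeting and close.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def fake_mariadb():
    conn, _ = server.accept()
    conn.sendall(b"\x0a5.5.5-10.11.9-MariaDB\x00")
    conn.close()

threading.Thread(target=fake_mariadb, daemon=True).start()
alive = wait_for_banner("127.0.0.1", port, "MariaDB", timeout=5)
print(alive)
```

Matching on the banner rather than on a bare TCP connect is what lets the check distinguish a ready mysqld from a port that merely accepts connections.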
2026-02-19 03:26:17.052442 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-19 03:26:17.052459 | orchestrator | Thursday 19 February 2026 03:26:17 +0000 (0:00:00.681) 0:02:34.725 ***** 2026-02-19 03:26:29.566574 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:26:29.566682 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:26:29.566698 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:26:29.566710 | orchestrator | 2026-02-19 03:26:29.566723 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-19 03:26:29.566735 | orchestrator | Thursday 19 February 2026 03:26:19 +0000 (0:00:02.402) 0:02:37.127 ***** 2026-02-19 03:26:29.566747 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:26:29.566758 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:26:29.566769 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:26:29.566780 | orchestrator | 2026-02-19 03:26:29.566791 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-19 03:26:29.566802 | orchestrator | Thursday 19 February 2026 03:26:21 +0000 (0:00:02.239) 0:02:39.366 ***** 2026-02-19 03:26:29.566813 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:26:29.566824 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:26:29.566835 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:26:29.566846 | orchestrator | 2026-02-19 03:26:29.566857 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-19 03:26:29.566868 | orchestrator | Thursday 19 February 2026 03:26:24 +0000 (0:00:02.495) 0:02:41.862 ***** 2026-02-19 03:26:29.566879 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:26:29.566890 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:26:29.566900 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:26:29.566985 | orchestrator | 
2026-02-19 03:26:29.567020 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-19 03:26:29.567031 | orchestrator | Thursday 19 February 2026 03:26:26 +0000 (0:00:02.307) 0:02:44.169 *****
2026-02-19 03:26:29.567042 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:26:29.567054 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:26:29.567065 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:26:29.567075 | orchestrator |
2026-02-19 03:26:29.567086 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-19 03:26:29.567097 | orchestrator | Thursday 19 February 2026 03:26:29 +0000 (0:00:02.564) 0:02:46.733 *****
2026-02-19 03:26:29.567108 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:26:29.567119 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:26:29.567130 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:26:29.567140 | orchestrator |
2026-02-19 03:26:29.567152 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 03:26:29.567164 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-02-19 03:26:29.567176 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-19 03:26:29.567187 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-19 03:26:29.567198 | orchestrator |
2026-02-19 03:26:29.567209 | orchestrator |
2026-02-19 03:26:29.567220 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 03:26:29.567231 | orchestrator | Thursday 19 February 2026 03:26:29 +0000 (0:00:00.310) 0:02:47.044 *****
2026-02-19 03:26:29.567242 | orchestrator | ===============================================================================
2026-02-19 03:26:29.567260 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.15s
2026-02-19 03:26:29.567271 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.27s
2026-02-19 03:26:29.567282 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.27s
2026-02-19 03:26:29.567292 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.83s
2026-02-19 03:26:29.567303 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.12s
2026-02-19 03:26:29.567314 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.63s
2026-02-19 03:26:29.567325 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.16s
2026-02-19 03:26:29.567336 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.64s
2026-02-19 03:26:29.567347 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.55s
2026-02-19 03:26:29.567357 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.94s
2026-02-19 03:26:29.567368 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.63s
2026-02-19 03:26:29.567379 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.57s
2026-02-19 03:26:29.567389 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.56s
2026-02-19 03:26:29.567400 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.53s
2026-02-19 03:26:29.567411 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.52s
2026-02-19 03:26:29.567422 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.50s
2026-02-19 03:26:29.567432 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.42s
2026-02-19 03:26:29.567443 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.40s
2026-02-19 03:26:29.567454 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.31s
2026-02-19 03:26:29.567465 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.24s
2026-02-19 03:26:31.608361 | orchestrator | 2026-02-19 03:26:31 | INFO  | Task baa3e167-6fb0-4a43-a532-3f1d266e9d8e (rabbitmq) was prepared for execution.
2026-02-19 03:26:31.609358 | orchestrator | 2026-02-19 03:26:31 | INFO  | It takes a moment until task baa3e167-6fb0-4a43-a532-3f1d266e9d8e (rabbitmq) has been started and output is visible here.
2026-02-19 03:26:43.649806 | orchestrator |
2026-02-19 03:26:43.649974 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-19 03:26:43.650001 | orchestrator |
2026-02-19 03:26:43.650013 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-19 03:26:43.650090 | orchestrator | Thursday 19 February 2026 03:26:35 +0000 (0:00:00.137) 0:00:00.137 *****
2026-02-19 03:26:43.650105 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:26:43.650120 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:26:43.650134 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:26:43.650147 | orchestrator |
2026-02-19 03:26:43.650162 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-19 03:26:43.650175 | orchestrator | Thursday 19 February 2026 03:26:35 +0000 (0:00:00.227) 0:00:00.364 *****
2026-02-19 03:26:43.650189 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-19 03:26:43.650198 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-19 03:26:43.650206 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-02-19 03:26:43.650214 | orchestrator |
2026-02-19 03:26:43.650222 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-02-19 03:26:43.650231 | orchestrator |
2026-02-19 03:26:43.650240 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-19 03:26:43.650248 | orchestrator | Thursday 19 February 2026 03:26:36 +0000 (0:00:00.400) 0:00:00.765 *****
2026-02-19 03:26:43.650256 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:26:43.650265 | orchestrator |
2026-02-19 03:26:43.650273 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-19 03:26:43.650281 | orchestrator | Thursday 19 February 2026 03:26:36 +0000 (0:00:00.383) 0:00:01.148 *****
2026-02-19 03:26:43.650288 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:26:43.650296 | orchestrator |
2026-02-19 03:26:43.650304 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-02-19 03:26:43.650312 | orchestrator | Thursday 19 February 2026 03:26:37 +0000 (0:00:00.956) 0:00:02.105 *****
2026-02-19 03:26:43.650320 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:26:43.650330 | orchestrator |
2026-02-19 03:26:43.650340 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-02-19 03:26:43.650348 | orchestrator | Thursday 19 February 2026 03:26:37 +0000 (0:00:00.330) 0:00:02.436 *****
2026-02-19 03:26:43.650357 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:26:43.650366 | orchestrator |
2026-02-19 03:26:43.650375 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-02-19 03:26:43.650384 | orchestrator | Thursday 19 February 2026 03:26:38 +0000 (0:00:00.322) 0:00:02.758 *****
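The version guards around this point ("Check if running RabbitMQ is at most one version behind" and "Catch when RabbitMQ is being downgraded") are skipped here because no RabbitMQ container exists yet, so there is no running version to compare. The gate they implement can be sketched as a series comparison (the exact rule kolla-ansible applies may differ; this is an assumption):

```python
def check_upgrade(running, target):
    # Compare (major, minor) series; refuse downgrades and jumps of more
    # than one minor version within the same major series.
    r = tuple(int(p) for p in running.split(".")[:2])
    t = tuple(int(p) for p in target.split(".")[:2])
    if t < r:
        return "downgrade"
    if t[0] == r[0] and t[1] - r[1] > 1:
        return "too far ahead"
    return "ok"

print(check_upgrade("3.12.4", "3.13.7"))  # ok: one minor step up
print(check_upgrade("3.13.7", "3.12.4"))  # downgrade
print(check_upgrade("3.11.2", "3.13.7"))  # too far ahead
```

A run that trips either branch would abort the upgrade before any container is touched, which is why these checks come before the config-copy tasks.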
2026-02-19 03:26:43.650393 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:26:43.650402 | orchestrator |
2026-02-19 03:26:43.650412 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-02-19 03:26:43.650421 | orchestrator | Thursday 19 February 2026 03:26:38 +0000 (0:00:00.354) 0:00:03.113 *****
2026-02-19 03:26:43.650430 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:26:43.650439 | orchestrator |
2026-02-19 03:26:43.650448 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-19 03:26:43.650457 | orchestrator | Thursday 19 February 2026 03:26:38 +0000 (0:00:00.432) 0:00:03.546 *****
2026-02-19 03:26:43.650482 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:26:43.650490 | orchestrator |
2026-02-19 03:26:43.650520 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-19 03:26:43.650528 | orchestrator | Thursday 19 February 2026 03:26:39 +0000 (0:00:00.715) 0:00:04.262 *****
2026-02-19 03:26:43.650536 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:26:43.650544 | orchestrator |
2026-02-19 03:26:43.650552 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-02-19 03:26:43.650559 | orchestrator | Thursday 19 February 2026 03:26:40 +0000 (0:00:00.881) 0:00:05.143 *****
2026-02-19 03:26:43.650568 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:26:43.650581 | orchestrator |
2026-02-19 03:26:43.650601 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-02-19 03:26:43.650617 | orchestrator | Thursday 19 February 2026 03:26:40 +0000 (0:00:00.355) 0:00:05.500 *****
2026-02-19 03:26:43.650629 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:26:43.650642 | orchestrator |
2026-02-19 03:26:43.650655 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-02-19 03:26:43.650666 | orchestrator | Thursday 19 February 2026 03:26:41 +0000 (0:00:00.355) 0:00:05.856 *****
2026-02-19 03:26:43.650724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-19 03:26:43.650757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-19 03:26:43.650773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-19 03:26:43.650797 | orchestrator |
2026-02-19 03:26:43.650812 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-02-19 03:26:43.650820 | orchestrator | Thursday 19 February 2026 03:26:42 +0000 (0:00:00.837) 0:00:06.694 *****
2026-02-19 03:26:43.650829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-19 03:26:43.650847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-19 03:27:01.496913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-19 03:27:01.497088 | orchestrator |
2026-02-19 03:27:01.497102 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-02-19 03:27:01.497113 | orchestrator | Thursday 19 February 2026 03:26:43 +0000 (0:00:01.614) 0:00:08.308 *****
2026-02-19 03:27:01.497145 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-19 03:27:01.497157 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-19 03:27:01.497165 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-19 03:27:01.497173 | orchestrator |
2026-02-19 03:27:01.497181 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-02-19 03:27:01.497190 | orchestrator | Thursday 19 February 2026 03:26:45 +0000 (0:00:01.469) 0:00:09.778 *****
2026-02-19 03:27:01.497198 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-19 03:27:01.497220 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-19 03:27:01.497229 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-19 03:27:01.497237 | orchestrator |
2026-02-19 03:27:01.497245 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-02-19 03:27:01.497253 | orchestrator | Thursday 19 February 2026 03:26:46 +0000 (0:00:01.705) 0:00:11.483 *****
2026-02-19 03:27:01.497261 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-19 03:27:01.497269 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-19 03:27:01.497277 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-19 03:27:01.497285 | orchestrator |
2026-02-19 03:27:01.497293 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-02-19 03:27:01.497302 | orchestrator | Thursday 19 February 2026 03:26:48 +0000 (0:00:01.254) 0:00:12.738 *****
2026-02-19 03:27:01.497310 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-19 03:27:01.497318 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-19 03:27:01.497326 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-19 03:27:01.497334 | orchestrator |
2026-02-19 03:27:01.497342 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-02-19 03:27:01.497350 | orchestrator | Thursday 19 February 2026 03:26:49 +0000 (0:00:01.524) 0:00:14.262 *****
2026-02-19 03:27:01.497358 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-19 03:27:01.497366 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-19 03:27:01.497375 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-19 03:27:01.497383 | orchestrator |
2026-02-19 03:27:01.497390 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-02-19 03:27:01.497398 | orchestrator | Thursday 19 February 2026 03:26:51 +0000 (0:00:01.411) 0:00:15.674 *****
2026-02-19 03:27:01.497407 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-19 03:27:01.497415 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-19 03:27:01.497423 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-19 03:27:01.497431 | orchestrator |
2026-02-19 03:27:01.497439 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-19 03:27:01.497447 | orchestrator | Thursday 19 February 2026 03:26:52 +0000 (0:00:01.370) 0:00:17.044 *****
2026-02-19 03:27:01.497456 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:27:01.497465 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:27:01.497488 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:27:01.497503 | orchestrator |
2026-02-19 03:27:01.497512 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-02-19 03:27:01.497521 | orchestrator | Thursday 19 February 2026 03:26:52 +0000 (0:00:00.381) 0:00:17.425 *****
2026-02-19 03:27:01.497531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-19 03:27:01.497545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-19 03:27:01.497555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-19 03:27:01.497564 | orchestrator |
2026-02-19 03:27:01.497572 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-02-19 03:27:01.497580 | orchestrator | Thursday 19 February 2026 03:26:53 +0000 (0:00:01.154) 0:00:18.580 *****
2026-02-19 03:27:01.497589 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:27:01.497597 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:27:01.497605 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:27:01.497614 | orchestrator |
2026-02-19 03:27:01.497622 | orchestrator | TASK
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-19 03:27:01.497634 | orchestrator | Thursday 19 February 2026 03:26:54 +0000 (0:00:00.861) 0:00:19.441 ***** 2026-02-19 03:27:01.497644 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:27:01.497652 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:27:01.497661 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:27:01.497670 | orchestrator | 2026-02-19 03:27:01.497677 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-19 03:27:01.497692 | orchestrator | Thursday 19 February 2026 03:27:01 +0000 (0:00:06.712) 0:00:26.154 ***** 2026-02-19 03:28:42.440884 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:28:42.440995 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:28:42.441011 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:28:42.441022 | orchestrator | 2026-02-19 03:28:42.441034 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-19 03:28:42.441047 | orchestrator | 2026-02-19 03:28:42.441057 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-19 03:28:42.441068 | orchestrator | Thursday 19 February 2026 03:27:02 +0000 (0:00:00.612) 0:00:26.766 ***** 2026-02-19 03:28:42.441079 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:28:42.441091 | orchestrator | 2026-02-19 03:28:42.441101 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-19 03:28:42.441112 | orchestrator | Thursday 19 February 2026 03:27:02 +0000 (0:00:00.611) 0:00:27.378 ***** 2026-02-19 03:28:42.441122 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:28:42.441133 | orchestrator | 2026-02-19 03:28:42.441143 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-19 03:28:42.441153 | orchestrator | Thursday 
19 February 2026 03:27:02 +0000 (0:00:00.271) 0:00:27.649 ***** 2026-02-19 03:28:42.441164 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:28:42.441175 | orchestrator | 2026-02-19 03:28:42.441186 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-19 03:28:42.441196 | orchestrator | Thursday 19 February 2026 03:27:04 +0000 (0:00:01.635) 0:00:29.285 ***** 2026-02-19 03:28:42.441206 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:28:42.441217 | orchestrator | 2026-02-19 03:28:42.441228 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-19 03:28:42.441239 | orchestrator | 2026-02-19 03:28:42.441249 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-19 03:28:42.441260 | orchestrator | Thursday 19 February 2026 03:28:01 +0000 (0:00:56.834) 0:01:26.120 ***** 2026-02-19 03:28:42.441270 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:28:42.441281 | orchestrator | 2026-02-19 03:28:42.441291 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-19 03:28:42.441302 | orchestrator | Thursday 19 February 2026 03:28:02 +0000 (0:00:00.691) 0:01:26.811 ***** 2026-02-19 03:28:42.441312 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:28:42.441323 | orchestrator | 2026-02-19 03:28:42.441333 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-19 03:28:42.441344 | orchestrator | Thursday 19 February 2026 03:28:02 +0000 (0:00:00.216) 0:01:27.028 ***** 2026-02-19 03:28:42.441355 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:28:42.441365 | orchestrator | 2026-02-19 03:28:42.441375 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-19 03:28:42.441393 | orchestrator | Thursday 19 February 2026 03:28:03 +0000 (0:00:01.612) 
0:01:28.641 ***** 2026-02-19 03:28:42.441404 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:28:42.441415 | orchestrator | 2026-02-19 03:28:42.441426 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-19 03:28:42.441437 | orchestrator | 2026-02-19 03:28:42.441448 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-19 03:28:42.441459 | orchestrator | Thursday 19 February 2026 03:28:20 +0000 (0:00:16.432) 0:01:45.073 ***** 2026-02-19 03:28:42.441469 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:28:42.441480 | orchestrator | 2026-02-19 03:28:42.441491 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-19 03:28:42.441521 | orchestrator | Thursday 19 February 2026 03:28:21 +0000 (0:00:00.760) 0:01:45.834 ***** 2026-02-19 03:28:42.441533 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:28:42.441544 | orchestrator | 2026-02-19 03:28:42.441554 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-19 03:28:42.441565 | orchestrator | Thursday 19 February 2026 03:28:21 +0000 (0:00:00.230) 0:01:46.065 ***** 2026-02-19 03:28:42.441575 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:28:42.441586 | orchestrator | 2026-02-19 03:28:42.441597 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-19 03:28:42.441608 | orchestrator | Thursday 19 February 2026 03:28:22 +0000 (0:00:01.584) 0:01:47.649 ***** 2026-02-19 03:28:42.441619 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:28:42.441629 | orchestrator | 2026-02-19 03:28:42.441640 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-19 03:28:42.441651 | orchestrator | 2026-02-19 03:28:42.441661 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-02-19 03:28:42.441672 | orchestrator | Thursday 19 February 2026 03:28:38 +0000 (0:00:15.871) 0:02:03.521 ***** 2026-02-19 03:28:42.441682 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:28:42.441693 | orchestrator | 2026-02-19 03:28:42.441703 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-19 03:28:42.441714 | orchestrator | Thursday 19 February 2026 03:28:39 +0000 (0:00:00.497) 0:02:04.019 ***** 2026-02-19 03:28:42.441725 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-19 03:28:42.441735 | orchestrator | enable_outward_rabbitmq_True 2026-02-19 03:28:42.441745 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-19 03:28:42.441755 | orchestrator | outward_rabbitmq_restart 2026-02-19 03:28:42.441766 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:28:42.441797 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:28:42.441809 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:28:42.441819 | orchestrator | 2026-02-19 03:28:42.441830 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-19 03:28:42.441840 | orchestrator | skipping: no hosts matched 2026-02-19 03:28:42.441850 | orchestrator | 2026-02-19 03:28:42.441860 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-19 03:28:42.441871 | orchestrator | skipping: no hosts matched 2026-02-19 03:28:42.441881 | orchestrator | 2026-02-19 03:28:42.441891 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-19 03:28:42.441902 | orchestrator | skipping: no hosts matched 2026-02-19 03:28:42.441912 | orchestrator | 2026-02-19 03:28:42.441922 | orchestrator | PLAY RECAP ********************************************************************* 
2026-02-19 03:28:42.441948 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-19 03:28:42.441961 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:28:42.441972 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:28:42.441983 | orchestrator | 2026-02-19 03:28:42.441994 | orchestrator | 2026-02-19 03:28:42.442004 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:28:42.442100 | orchestrator | Thursday 19 February 2026 03:28:42 +0000 (0:00:02.672) 0:02:06.691 ***** 2026-02-19 03:28:42.442116 | orchestrator | =============================================================================== 2026-02-19 03:28:42.442127 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 89.14s 2026-02-19 03:28:42.442137 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.71s 2026-02-19 03:28:42.442158 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 4.83s 2026-02-19 03:28:42.442169 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.67s 2026-02-19 03:28:42.442179 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.06s 2026-02-19 03:28:42.442189 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.71s 2026-02-19 03:28:42.442199 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.61s 2026-02-19 03:28:42.442208 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.52s 2026-02-19 03:28:42.442218 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.47s 2026-02-19 03:28:42.442227 
| orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.41s 2026-02-19 03:28:42.442236 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.37s 2026-02-19 03:28:42.442246 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.25s 2026-02-19 03:28:42.442256 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.15s 2026-02-19 03:28:42.442266 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.96s 2026-02-19 03:28:42.442281 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.88s 2026-02-19 03:28:42.442292 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.86s 2026-02-19 03:28:42.442302 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.84s 2026-02-19 03:28:42.442312 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.72s 2026-02-19 03:28:42.442322 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.72s 2026-02-19 03:28:42.442333 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 0.61s 2026-02-19 03:28:44.765080 | orchestrator | 2026-02-19 03:28:44 | INFO  | Task 1fdaa47d-d717-41e3-b029-d678064dfa39 (openvswitch) was prepared for execution. 2026-02-19 03:28:44.765161 | orchestrator | 2026-02-19 03:28:44 | INFO  | It takes a moment until task 1fdaa47d-d717-41e3-b029-d678064dfa39 (openvswitch) has been started and output is visible here. 
2026-02-19 03:28:57.373918 | orchestrator | 2026-02-19 03:28:57.374082 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 03:28:57.374103 | orchestrator | 2026-02-19 03:28:57.374115 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 03:28:57.374128 | orchestrator | Thursday 19 February 2026 03:28:48 +0000 (0:00:00.250) 0:00:00.250 ***** 2026-02-19 03:28:57.374137 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:28:57.374144 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:28:57.374151 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:28:57.374157 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:28:57.374166 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:28:57.374176 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:28:57.374185 | orchestrator | 2026-02-19 03:28:57.374192 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 03:28:57.374199 | orchestrator | Thursday 19 February 2026 03:28:49 +0000 (0:00:00.680) 0:00:00.930 ***** 2026-02-19 03:28:57.374205 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-19 03:28:57.374213 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-19 03:28:57.374219 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-19 03:28:57.374226 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-19 03:28:57.374232 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-19 03:28:57.374238 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-19 03:28:57.374244 | orchestrator | 2026-02-19 03:28:57.374270 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-02-19 03:28:57.374277 | orchestrator | 2026-02-19 03:28:57.374284 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-19 03:28:57.374290 | orchestrator | Thursday 19 February 2026 03:28:50 +0000 (0:00:00.596) 0:00:01.527 ***** 2026-02-19 03:28:57.374297 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 03:28:57.374304 | orchestrator | 2026-02-19 03:28:57.374310 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-19 03:28:57.374316 | orchestrator | Thursday 19 February 2026 03:28:51 +0000 (0:00:01.107) 0:00:02.635 ***** 2026-02-19 03:28:57.374323 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-19 03:28:57.374329 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-19 03:28:57.374335 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-19 03:28:57.374341 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-19 03:28:57.374347 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-19 03:28:57.374353 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-19 03:28:57.374359 | orchestrator | 2026-02-19 03:28:57.374365 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-19 03:28:57.374372 | orchestrator | Thursday 19 February 2026 03:28:52 +0000 (0:00:01.156) 0:00:03.791 ***** 2026-02-19 03:28:57.374378 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-19 03:28:57.374384 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-19 03:28:57.374390 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-19 03:28:57.374396 | orchestrator | changed: 
[testbed-node-1] => (item=openvswitch) 2026-02-19 03:28:57.374402 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-19 03:28:57.374408 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-19 03:28:57.374414 | orchestrator | 2026-02-19 03:28:57.374420 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-19 03:28:57.374426 | orchestrator | Thursday 19 February 2026 03:28:54 +0000 (0:00:01.558) 0:00:05.350 ***** 2026-02-19 03:28:57.374432 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-19 03:28:57.374438 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:28:57.374446 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-19 03:28:57.374452 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:28:57.374458 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-19 03:28:57.374464 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:28:57.374470 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-19 03:28:57.374476 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:28:57.374482 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-19 03:28:57.374488 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:28:57.374494 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-19 03:28:57.374500 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:28:57.374506 | orchestrator | 2026-02-19 03:28:57.374512 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-19 03:28:57.374519 | orchestrator | Thursday 19 February 2026 03:28:55 +0000 (0:00:01.133) 0:00:06.483 ***** 2026-02-19 03:28:57.374525 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:28:57.374531 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:28:57.374537 | orchestrator | skipping: [testbed-node-2] 
2026-02-19 03:28:57.374543 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:28:57.374549 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:28:57.374555 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:28:57.374561 | orchestrator | 2026-02-19 03:28:57.374567 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-19 03:28:57.374578 | orchestrator | Thursday 19 February 2026 03:28:55 +0000 (0:00:00.766) 0:00:07.250 ***** 2026-02-19 03:28:57.374603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 03:28:57.374615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-02-19 03:28:57.374621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 03:28:57.374691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 03:28:57.374712 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 03:28:57.374730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 03:28:59.856077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 03:28:59.856190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 03:28:59.856212 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 03:28:59.856228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 03:28:59.856260 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 03:28:59.856317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 03:28:59.856336 | orchestrator | 2026-02-19 03:28:59.856352 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-19 03:28:59.856367 | orchestrator | Thursday 19 February 2026 03:28:57 +0000 (0:00:01.444) 0:00:08.695 ***** 2026-02-19 03:28:59.856383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 03:28:59.856398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 03:28:59.856411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 03:28:59.856426 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 03:28:59.856457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 03:28:59.856482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 03:29:02.621636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-19 03:29:02.621820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-19 03:29:02.621839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-19 03:29:02.621866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-19 03:29:02.621894 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-19 03:29:02.621920 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-19 03:29:02.621931 | orchestrator |
2026-02-19 03:29:02.621942 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-02-19 03:29:02.621953 | orchestrator | Thursday 19 February 2026 03:28:59 +0000 (0:00:02.491) 0:00:11.186 *****
2026-02-19 03:29:02.621962 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:29:02.621972 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:29:02.621980 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:29:02.622001 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:29:02.622010 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:29:02.622157 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:29:02.622168 | orchestrator |
2026-02-19 03:29:02.622178 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-02-19 03:29:02.622189 | orchestrator | Thursday 19 February 2026 03:29:00 +0000 (0:00:00.930) 0:00:12.116 *****
2026-02-19 03:29:02.622201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 03:29:02.622218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 03:29:02.622258 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 03:29:02.622279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 03:29:02.622308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 03:29:28.273278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 03:29:28.273397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-19 03:29:28.273413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-19 03:29:28.273461 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-19 03:29:28.273473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-19 03:29:28.273500 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-19 03:29:28.273511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-19 03:29:28.273521 | orchestrator |
2026-02-19 03:29:28.273531 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-19 03:29:28.273542 | orchestrator | Thursday 19 February 2026 03:29:02 +0000 (0:00:01.830) 0:00:13.947 *****
2026-02-19 03:29:28.273550 | orchestrator |
2026-02-19 03:29:28.273558 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-19 03:29:28.273566 | orchestrator | Thursday 19 February 2026 03:29:02 +0000 (0:00:00.286) 0:00:14.233 *****
2026-02-19 03:29:28.273575 | orchestrator |
2026-02-19 03:29:28.273590 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-19 03:29:28.273598 | orchestrator | Thursday 19 February 2026 03:29:03 +0000 (0:00:00.133) 0:00:14.367 *****
2026-02-19 03:29:28.273606 | orchestrator |
2026-02-19 03:29:28.273614 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-19 03:29:28.273622 | orchestrator | Thursday 19 February 2026 03:29:03 +0000 (0:00:00.127) 0:00:14.495 *****
2026-02-19 03:29:28.273631 | orchestrator |
2026-02-19 03:29:28.273641 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-19 03:29:28.273650 | orchestrator | Thursday 19 February 2026 03:29:03 +0000 (0:00:00.131) 0:00:14.627 *****
2026-02-19 03:29:28.273660 | orchestrator |
2026-02-19 03:29:28.273669 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-19 03:29:28.273679 | orchestrator | Thursday 19 February 2026 03:29:03 +0000 (0:00:00.128) 0:00:14.755 *****
2026-02-19 03:29:28.273687 | orchestrator |
2026-02-19 03:29:28.273696 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-19 03:29:28.273705 | orchestrator | Thursday 19 February 2026 03:29:03 +0000 (0:00:00.126) 0:00:14.882 *****
2026-02-19 03:29:28.273741 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:29:28.273753 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:29:28.273764 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:29:28.273774 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:29:28.273785 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:29:28.273797 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:29:28.273808 | orchestrator |
2026-02-19 03:29:28.273819 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-19 03:29:28.273832 | orchestrator | Thursday 19 February 2026 03:29:12 +0000 (0:00:08.681) 0:00:23.564 *****
2026-02-19 03:29:28.273844 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:29:28.273863 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:29:28.273873 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:29:28.273883 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:29:28.273892 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:29:28.273902 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:29:28.273911 | orchestrator |
2026-02-19 03:29:28.273921 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-19 03:29:28.273930 | orchestrator | Thursday 19 February 2026 03:29:13 +0000 (0:00:01.128) 0:00:24.692 *****
2026-02-19 03:29:28.273939 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:29:28.273948 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:29:28.273957 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:29:28.273967 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:29:28.273977 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:29:28.273986 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:29:28.273995 | orchestrator |
2026-02-19 03:29:28.274005 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-19 03:29:28.274062 | orchestrator | Thursday 19 February 2026 03:29:21 +0000 (0:00:07.759) 0:00:32.451 *****
2026-02-19 03:29:28.274078 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-19 03:29:28.274090 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-19 03:29:28.274099 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-19 03:29:28.274107 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-19 03:29:28.274116 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-19 03:29:28.274126 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-19 03:29:28.274137 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-19 03:29:28.274165 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-19 03:29:41.441557 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-19 03:29:41.441798 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-19 03:29:41.441828 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-19 03:29:41.441844 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-19 03:29:41.441857 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-19 03:29:41.441871 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-19 03:29:41.441884 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-19 03:29:41.441897 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-19 03:29:41.441910 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-19 03:29:41.441924 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-19 03:29:41.441938 | orchestrator |
2026-02-19 03:29:41.441952 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-19 03:29:41.441967 | orchestrator | Thursday 19 February 2026 03:29:28 +0000 (0:00:07.055) 0:00:39.506 *****
2026-02-19 03:29:41.441983 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-02-19 03:29:41.441998 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:29:41.442078 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-02-19 03:29:41.442097 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:29:41.442112 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-02-19 03:29:41.442128 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:29:41.442143 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-02-19 03:29:41.442157 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-02-19 03:29:41.442172 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-02-19 03:29:41.442187 | orchestrator |
2026-02-19 03:29:41.442202 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-19 03:29:41.442217 | orchestrator | Thursday 19 February 2026 03:29:30 +0000 (0:00:02.480) 0:00:41.986 *****
2026-02-19 03:29:41.442231 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-19 03:29:41.442247 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:29:41.442263 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-19 03:29:41.442279 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:29:41.442294 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-02-19 03:29:41.442309 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:29:41.442323 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-19 03:29:41.442337 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-19 03:29:41.442371 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-02-19 03:29:41.442387 | orchestrator |
2026-02-19 03:29:41.442402 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-19 03:29:41.442416 | orchestrator | Thursday 19 February 2026 03:29:33 +0000 (0:00:03.144) 0:00:45.131 *****
2026-02-19 03:29:41.442432 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:29:41.442446 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:29:41.442487 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:29:41.442504 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:29:41.442519 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:29:41.442533 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:29:41.442548 | orchestrator |
2026-02-19 03:29:41.442564 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 03:29:41.442580 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-19 03:29:41.442596 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-19 03:29:41.442609 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-19 03:29:41.442624 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-19 03:29:41.442640 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-19 03:29:41.442654 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-19 03:29:41.442668 | orchestrator |
2026-02-19 03:29:41.442682 | orchestrator |
2026-02-19 03:29:41.442690 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 03:29:41.442728 | orchestrator | Thursday 19 February 2026 03:29:40 +0000 (0:00:07.095) 0:00:52.226 *****
2026-02-19 03:29:41.442758 | orchestrator | ===============================================================================
2026-02-19 03:29:41.442766 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.85s
2026-02-19 03:29:41.442774 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.68s
2026-02-19 03:29:41.442782 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.06s
2026-02-19 03:29:41.442790 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.14s
2026-02-19 03:29:41.442797 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.49s
2026-02-19 03:29:41.442805 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.48s
2026-02-19 03:29:41.442813 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.83s
2026-02-19 03:29:41.442821 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.56s
2026-02-19 03:29:41.442829 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.44s
2026-02-19 03:29:41.442836 | orchestrator | module-load : Load modules ---------------------------------------------- 1.16s
2026-02-19 03:29:41.442844 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.13s
2026-02-19 03:29:41.442852 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.13s
2026-02-19 03:29:41.442860 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.11s
2026-02-19 03:29:41.442868 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.93s
2026-02-19 03:29:41.442875 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.93s
2026-02-19 03:29:41.442883 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.77s
2026-02-19 03:29:41.442891 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.68s
2026-02-19 03:29:41.442898 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2026-02-19 03:29:43.865520 | orchestrator | 2026-02-19 03:29:43 | INFO  | Task 27b58272-12a5-4f96-a8ad-5a7ecd713b01 (ovn) was prepared for execution.
2026-02-19 03:29:43.865637 | orchestrator | 2026-02-19 03:29:43 | INFO  | It takes a moment until task 27b58272-12a5-4f96-a8ad-5a7ecd713b01 (ovn) has been started and output is visible here.
2026-02-19 03:29:54.171236 | orchestrator |
2026-02-19 03:29:54.171358 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-19 03:29:54.171367 | orchestrator |
2026-02-19 03:29:54.171371 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-19 03:29:54.171376 | orchestrator | Thursday 19 February 2026 03:29:47 +0000 (0:00:00.164) 0:00:00.164 *****
2026-02-19 03:29:54.171380 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:29:54.171386 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:29:54.171390 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:29:54.171394 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:29:54.171398 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:29:54.171402 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:29:54.171406 | orchestrator |
2026-02-19 03:29:54.171410 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-19 03:29:54.171414 | orchestrator | Thursday 19 February 2026 03:29:48 +0000 (0:00:00.682) 0:00:00.847 *****
2026-02-19 03:29:54.171433 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-19 03:29:54.171439 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-19 03:29:54.171442 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-19 03:29:54.171446 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-19 03:29:54.171450 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-19 03:29:54.171454 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-19 03:29:54.171457 | orchestrator |
2026-02-19 03:29:54.171462 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-19 03:29:54.171466 | orchestrator |
2026-02-19 03:29:54.171469 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-19 03:29:54.171473 | orchestrator | Thursday 19 February 2026 03:29:49 +0000 (0:00:00.809) 0:00:01.657 *****
2026-02-19 03:29:54.171477 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:29:54.171483 | orchestrator |
2026-02-19 03:29:54.171487 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-19 03:29:54.171491 | orchestrator | Thursday 19 February 2026 03:29:50 +0000 (0:00:01.051) 0:00:02.708 *****
2026-02-19 03:29:54.171496 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:29:54.171503 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:29:54.171507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:29:54.171511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:29:54.171535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:29:54.171814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:29:54.171897 | orchestrator |
2026-02-19 03:29:54.171905 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-02-19 03:29:54.171911 | orchestrator | Thursday 19 February 2026 03:29:51 +0000 (0:00:01.143) 0:00:03.852 *****
2026-02-19 03:29:54.171933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:29:54.171938 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:29:54.171942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:29:54.171946 | orchestrator | changed: [testbed-node-0] => (item={'key':
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:29:54.171950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:29:54.171954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:29:54.171973 | orchestrator | 2026-02-19 03:29:54.171977 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-19 03:29:54.171981 | orchestrator | Thursday 19 February 2026 03:29:53 +0000 (0:00:01.490) 0:00:05.343 ***** 2026-02-19 03:29:54.171985 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:29:54.171989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:29:54.172001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801506 | orchestrator | 2026-02-19 03:30:20.801515 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-19 03:30:20.801534 | orchestrator | Thursday 19 February 2026 03:29:54 +0000 (0:00:01.147) 0:00:06.490 ***** 2026-02-19 03:30:20.801542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801636 | orchestrator | 2026-02-19 03:30:20.801643 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-19 03:30:20.801686 | orchestrator | Thursday 19 February 2026 03:29:55 +0000 (0:00:01.550) 0:00:08.041 ***** 
2026-02-19 03:30:20.801700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801723 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801745 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:30:20.801759 | orchestrator | 2026-02-19 03:30:20.801766 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-19 03:30:20.801774 | orchestrator | Thursday 19 February 2026 03:29:57 +0000 (0:00:01.338) 0:00:09.379 ***** 2026-02-19 03:30:20.801782 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:30:20.801791 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:30:20.801798 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:30:20.801804 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:30:20.801811 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:30:20.801819 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:30:20.801825 | orchestrator | 2026-02-19 03:30:20.801833 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-19 03:30:20.801840 | orchestrator | Thursday 19 February 2026 03:29:59 +0000 (0:00:02.582) 0:00:11.962 ***** 2026-02-19 03:30:20.801847 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-02-19 03:30:20.801855 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-19 03:30:20.801861 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-19 03:30:20.801867 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-19 03:30:20.801874 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-19 03:30:20.801880 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-19 03:30:20.801894 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-19 03:30:47.880363 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-19 03:30:47.880476 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-19 03:30:47.880504 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-19 03:30:47.880514 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-19 03:30:47.880523 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-19 03:30:47.880533 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-19 03:30:47.880553 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-19 03:30:47.880582 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-19 03:30:47.880591 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-19 03:30:47.880600 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-19 03:30:47.880609 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-19 03:30:47.880642 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-19 03:30:47.880652 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-19 03:30:47.880661 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-19 03:30:47.880669 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-19 03:30:47.880679 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-19 03:30:47.880688 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-19 03:30:47.880696 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-19 03:30:47.880705 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-19 03:30:47.880713 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-19 03:30:47.880722 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-19 03:30:47.880730 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-02-19 03:30:47.880739 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-19 03:30:47.880748 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-19 03:30:47.880757 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-19 03:30:47.880765 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-19 03:30:47.880774 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-19 03:30:47.880782 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-19 03:30:47.880791 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-19 03:30:47.880800 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-19 03:30:47.880808 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-19 03:30:47.880817 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-19 03:30:47.880826 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-19 03:30:47.880835 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-19 03:30:47.880845 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-19 03:30:47.880853 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 
'absent'}) 2026-02-19 03:30:47.880882 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-19 03:30:47.880895 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-19 03:30:47.880910 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-19 03:30:47.880921 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-19 03:30:47.880931 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-19 03:30:47.880941 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-19 03:30:47.880951 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-19 03:30:47.880961 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-19 03:30:47.880972 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-19 03:30:47.880982 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-19 03:30:47.880991 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-19 03:30:47.881001 | orchestrator | 2026-02-19 03:30:47.881013 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-19 03:30:47.881023 | orchestrator | Thursday 19 February 2026 03:30:20 +0000 (0:00:20.620) 0:00:32.582 ***** 2026-02-19 03:30:47.881033 | orchestrator | 2026-02-19 03:30:47.881044 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-19 03:30:47.881054 | orchestrator | Thursday 19 February 2026 03:30:20 +0000 (0:00:00.212) 0:00:32.795 ***** 2026-02-19 03:30:47.881063 | orchestrator | 2026-02-19 03:30:47.881073 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-19 03:30:47.881083 | orchestrator | Thursday 19 February 2026 03:30:20 +0000 (0:00:00.064) 0:00:32.860 ***** 2026-02-19 03:30:47.881094 | orchestrator | 2026-02-19 03:30:47.881114 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-19 03:30:47.881125 | orchestrator | Thursday 19 February 2026 03:30:20 +0000 (0:00:00.064) 0:00:32.924 ***** 2026-02-19 03:30:47.881134 | orchestrator | 2026-02-19 03:30:47.881145 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-19 03:30:47.881154 | orchestrator | Thursday 19 February 2026 03:30:20 +0000 (0:00:00.062) 0:00:32.987 ***** 2026-02-19 03:30:47.881164 | orchestrator | 2026-02-19 03:30:47.881174 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-19 03:30:47.881184 | orchestrator | Thursday 19 February 2026 03:30:20 +0000 (0:00:00.063) 0:00:33.050 ***** 2026-02-19 03:30:47.881194 | orchestrator | 2026-02-19 03:30:47.881203 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-19 03:30:47.881213 | orchestrator | Thursday 19 February 2026 03:30:20 +0000 (0:00:00.061) 0:00:33.112 ***** 2026-02-19 03:30:47.881224 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:30:47.881235 | orchestrator | ok: 
[testbed-node-3] 2026-02-19 03:30:47.881245 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:30:47.881255 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:30:47.881265 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:30:47.881275 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:30:47.881286 | orchestrator | 2026-02-19 03:30:47.881296 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-19 03:30:47.881306 | orchestrator | Thursday 19 February 2026 03:30:22 +0000 (0:00:01.604) 0:00:34.717 ***** 2026-02-19 03:30:47.881323 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:30:47.881333 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:30:47.881342 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:30:47.881350 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:30:47.881359 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:30:47.881368 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:30:47.881376 | orchestrator | 2026-02-19 03:30:47.881385 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-19 03:30:47.881394 | orchestrator | 2026-02-19 03:30:47.881405 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-19 03:30:47.881420 | orchestrator | Thursday 19 February 2026 03:30:45 +0000 (0:00:23.269) 0:00:57.987 ***** 2026-02-19 03:30:47.881435 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:30:47.881449 | orchestrator | 2026-02-19 03:30:47.881462 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-19 03:30:47.881476 | orchestrator | Thursday 19 February 2026 03:30:46 +0000 (0:00:00.681) 0:00:58.668 ***** 2026-02-19 03:30:47.881491 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-19 03:30:47.881505 | orchestrator | 2026-02-19 03:30:47.881519 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-19 03:30:47.881534 | orchestrator | Thursday 19 February 2026 03:30:46 +0000 (0:00:00.547) 0:00:59.216 ***** 2026-02-19 03:30:47.881548 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:30:47.881562 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:30:47.881578 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:30:47.881595 | orchestrator | 2026-02-19 03:30:47.881612 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-19 03:30:47.881696 | orchestrator | Thursday 19 February 2026 03:30:47 +0000 (0:00:00.975) 0:01:00.191 ***** 2026-02-19 03:30:58.763463 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:30:58.763637 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:30:58.763661 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:30:58.763673 | orchestrator | 2026-02-19 03:30:58.763685 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-19 03:30:58.763712 | orchestrator | Thursday 19 February 2026 03:30:48 +0000 (0:00:00.345) 0:01:00.537 ***** 2026-02-19 03:30:58.763722 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:30:58.763746 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:30:58.763764 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:30:58.763774 | orchestrator | 2026-02-19 03:30:58.763784 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-19 03:30:58.763794 | orchestrator | Thursday 19 February 2026 03:30:48 +0000 (0:00:00.338) 0:01:00.876 ***** 2026-02-19 03:30:58.763803 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:30:58.763813 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:30:58.763822 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:30:58.763832 | orchestrator | 
2026-02-19 03:30:58.763841 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-19 03:30:58.763851 | orchestrator | Thursday 19 February 2026 03:30:48 +0000 (0:00:00.334) 0:01:01.211 *****
2026-02-19 03:30:58.763861 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:30:58.763870 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:30:58.763879 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:30:58.763889 | orchestrator |
2026-02-19 03:30:58.763898 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-19 03:30:58.763908 | orchestrator | Thursday 19 February 2026 03:30:49 +0000 (0:00:00.498) 0:01:01.709 *****
2026-02-19 03:30:58.763918 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.763929 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.763938 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.763948 | orchestrator |
2026-02-19 03:30:58.763957 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-19 03:30:58.763988 | orchestrator | Thursday 19 February 2026 03:30:49 +0000 (0:00:00.285) 0:01:01.995 *****
2026-02-19 03:30:58.763998 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.764009 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.764020 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.764031 | orchestrator |
2026-02-19 03:30:58.764043 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-19 03:30:58.764054 | orchestrator | Thursday 19 February 2026 03:30:49 +0000 (0:00:00.309) 0:01:02.305 *****
2026-02-19 03:30:58.764065 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.764076 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.764086 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.764097 | orchestrator |
2026-02-19 03:30:58.764108 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-19 03:30:58.764120 | orchestrator | Thursday 19 February 2026 03:30:50 +0000 (0:00:00.303) 0:01:02.608 *****
2026-02-19 03:30:58.764131 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.764141 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.764152 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.764163 | orchestrator |
2026-02-19 03:30:58.764174 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-19 03:30:58.764185 | orchestrator | Thursday 19 February 2026 03:30:50 +0000 (0:00:00.287) 0:01:02.896 *****
2026-02-19 03:30:58.764196 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.764207 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.764219 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.764230 | orchestrator |
2026-02-19 03:30:58.764241 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-19 03:30:58.764252 | orchestrator | Thursday 19 February 2026 03:30:51 +0000 (0:00:00.487) 0:01:03.383 *****
2026-02-19 03:30:58.764263 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.764272 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.764282 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.764291 | orchestrator |
2026-02-19 03:30:58.764301 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-19 03:30:58.764310 | orchestrator | Thursday 19 February 2026 03:30:51 +0000 (0:00:00.297) 0:01:03.681 *****
2026-02-19 03:30:58.764320 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.764329 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.764338 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.764348 | orchestrator |
2026-02-19 03:30:58.764357 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-19 03:30:58.764367 | orchestrator | Thursday 19 February 2026 03:30:51 +0000 (0:00:00.303) 0:01:03.984 *****
2026-02-19 03:30:58.764376 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.764386 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.764395 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.764404 | orchestrator |
2026-02-19 03:30:58.764414 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-19 03:30:58.764423 | orchestrator | Thursday 19 February 2026 03:30:51 +0000 (0:00:00.279) 0:01:04.264 *****
2026-02-19 03:30:58.764432 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.764442 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.764451 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.764460 | orchestrator |
2026-02-19 03:30:58.764470 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-19 03:30:58.764480 | orchestrator | Thursday 19 February 2026 03:30:52 +0000 (0:00:00.481) 0:01:04.746 *****
2026-02-19 03:30:58.764497 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.764515 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.764533 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.764550 | orchestrator |
2026-02-19 03:30:58.764568 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-19 03:30:58.764597 | orchestrator | Thursday 19 February 2026 03:30:52 +0000 (0:00:00.302) 0:01:05.048 *****
2026-02-19 03:30:58.764637 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.764653 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.764670 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.764680 | orchestrator |
2026-02-19 03:30:58.764689 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-19 03:30:58.764699 | orchestrator | Thursday 19 February 2026 03:30:53 +0000 (0:00:00.303) 0:01:05.352 *****
2026-02-19 03:30:58.764726 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.764736 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.764746 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.764755 | orchestrator |
2026-02-19 03:30:58.764765 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-19 03:30:58.764780 | orchestrator | Thursday 19 February 2026 03:30:53 +0000 (0:00:00.293) 0:01:05.645 *****
2026-02-19 03:30:58.764791 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:30:58.764801 | orchestrator |
2026-02-19 03:30:58.764810 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-19 03:30:58.764820 | orchestrator | Thursday 19 February 2026 03:30:54 +0000 (0:00:00.718) 0:01:06.364 *****
2026-02-19 03:30:58.764829 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:30:58.764838 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:30:58.764847 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:30:58.764857 | orchestrator |
2026-02-19 03:30:58.764866 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-19 03:30:58.764876 | orchestrator | Thursday 19 February 2026 03:30:54 +0000 (0:00:00.428) 0:01:06.793 *****
2026-02-19 03:30:58.764885 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:30:58.764894 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:30:58.764904 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:30:58.764913 | orchestrator |
2026-02-19 03:30:58.764923 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-19 03:30:58.764932 | orchestrator | Thursday 19 February 2026 03:30:54 +0000 (0:00:00.445) 0:01:07.238 *****
2026-02-19 03:30:58.764941 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.764951 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.764960 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.764970 | orchestrator |
2026-02-19 03:30:58.764979 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-19 03:30:58.764988 | orchestrator | Thursday 19 February 2026 03:30:55 +0000 (0:00:00.314) 0:01:07.553 *****
2026-02-19 03:30:58.764998 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.765007 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.765016 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.765026 | orchestrator |
2026-02-19 03:30:58.765035 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-19 03:30:58.765044 | orchestrator | Thursday 19 February 2026 03:30:55 +0000 (0:00:00.503) 0:01:08.056 *****
2026-02-19 03:30:58.765054 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.765063 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.765072 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.765081 | orchestrator |
2026-02-19 03:30:58.765091 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-19 03:30:58.765100 | orchestrator | Thursday 19 February 2026 03:30:56 +0000 (0:00:00.316) 0:01:08.373 *****
2026-02-19 03:30:58.765109 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.765119 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.765128 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.765137 | orchestrator |
2026-02-19 03:30:58.765147 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-19 03:30:58.765156 | orchestrator | Thursday 19 February 2026 03:30:56 +0000 (0:00:00.325) 0:01:08.698 *****
2026-02-19 03:30:58.765176 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.765186 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.765195 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.765204 | orchestrator |
2026-02-19 03:30:58.765214 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-19 03:30:58.765223 | orchestrator | Thursday 19 February 2026 03:30:56 +0000 (0:00:00.341) 0:01:09.040 *****
2026-02-19 03:30:58.765232 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:30:58.765242 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:30:58.765251 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:30:58.765260 | orchestrator |
2026-02-19 03:30:58.765270 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-19 03:30:58.765279 | orchestrator | Thursday 19 February 2026 03:30:57 +0000 (0:00:00.524) 0:01:09.564 *****
2026-02-19 03:30:58.765291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:30:58.765304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:30:58.765314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:30:58.765337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373743 | orchestrator |
2026-02-19 03:31:05.373751 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-19 03:31:05.373758 | orchestrator | Thursday 19 February 2026 03:30:58 +0000 (0:00:01.508) 0:01:11.073 *****
2026-02-19 03:31:05.373766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373908 | orchestrator |
2026-02-19 03:31:05.373918 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-19 03:31:05.373929 | orchestrator | Thursday 19 February 2026 03:31:02 +0000 (0:00:03.966) 0:01:15.040 *****
2026-02-19 03:31:05.373941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.373982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:05.374006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:35.011277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:35.011412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:35.011428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:35.011441 | orchestrator |
2026-02-19 03:31:35.011454 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-19 03:31:35.011466 | orchestrator | Thursday 19 February 2026 03:31:04 +0000 (0:00:02.146) 0:01:17.186 *****
2026-02-19 03:31:35.011477 | orchestrator |
2026-02-19 03:31:35.011488 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-19 03:31:35.011499 | orchestrator | Thursday 19 February 2026 03:31:04 +0000 (0:00:00.068) 0:01:17.254 *****
2026-02-19 03:31:35.011509 | orchestrator |
2026-02-19 03:31:35.011520 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-19 03:31:35.011531 | orchestrator | Thursday 19 February 2026 03:31:05 +0000 (0:00:00.351) 0:01:17.606 *****
2026-02-19 03:31:35.011542 | orchestrator |
2026-02-19 03:31:35.011553 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-19 03:31:35.011564 | orchestrator | Thursday 19 February 2026 03:31:05 +0000 (0:00:00.072) 0:01:17.678 *****
2026-02-19 03:31:35.011676 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:31:35.011691 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:31:35.011701 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:31:35.011712 | orchestrator |
2026-02-19 03:31:35.011723 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-19 03:31:35.011734 | orchestrator | Thursday 19 February 2026 03:31:11 +0000 (0:00:06.634) 0:01:24.314 *****
2026-02-19 03:31:35.011745 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:31:35.011758 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:31:35.011771 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:31:35.011783 | orchestrator |
2026-02-19 03:31:35.011797 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-19 03:31:35.011809 | orchestrator | Thursday 19 February 2026 03:31:18 +0000 (0:00:06.606) 0:01:30.920 *****
2026-02-19 03:31:35.011821 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:31:35.011834 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:31:35.011847 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:31:35.011859 | orchestrator |
2026-02-19 03:31:35.011872 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-19 03:31:35.011885 | orchestrator | Thursday 19 February 2026 03:31:27 +0000 (0:00:09.153) 0:01:40.074 *****
2026-02-19 03:31:35.011897 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:31:35.011909 | orchestrator |
2026-02-19 03:31:35.011922 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-19 03:31:35.011934 | orchestrator | Thursday 19 February 2026 03:31:27 +0000 (0:00:00.133) 0:01:40.207 *****
2026-02-19 03:31:35.011947 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:31:35.011961 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:31:35.011974 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:31:35.011986 | orchestrator |
2026-02-19 03:31:35.011998 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-19 03:31:35.012010 | orchestrator | Thursday 19 February 2026 03:31:28 +0000 (0:00:01.015) 0:01:41.223 *****
2026-02-19 03:31:35.012023 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:31:35.012051 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:31:35.012071 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:31:35.012089 | orchestrator |
2026-02-19 03:31:35.012107 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-19 03:31:35.012126 | orchestrator | Thursday 19 February 2026 03:31:29 +0000 (0:00:00.634) 0:01:41.857 *****
2026-02-19 03:31:35.012146 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:31:35.012165 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:31:35.012183 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:31:35.012201 | orchestrator |
2026-02-19 03:31:35.012212 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-19 03:31:35.012238 | orchestrator | Thursday 19 February 2026 03:31:30 +0000 (0:00:00.782) 0:01:42.640 *****
2026-02-19 03:31:35.012250 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:31:35.012261 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:31:35.012271 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:31:35.012282 | orchestrator |
2026-02-19 03:31:35.012293 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-19 03:31:35.012303 | orchestrator | Thursday 19 February 2026 03:31:31 +0000 (0:00:00.703) 0:01:43.344 *****
2026-02-19 03:31:35.012314 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:31:35.012325 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:31:35.012353 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:31:35.012365 | orchestrator |
2026-02-19 03:31:35.012376 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-19 03:31:35.012386 | orchestrator | Thursday 19 February 2026 03:31:32 +0000 (0:00:01.300) 0:01:44.644 *****
2026-02-19 03:31:35.012397 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:31:35.012407 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:31:35.012418 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:31:35.012429 | orchestrator |
2026-02-19 03:31:35.012440 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-02-19 03:31:35.012451 | orchestrator | Thursday 19 February 2026 03:31:33 +0000 (0:00:00.824) 0:01:45.469 *****
2026-02-19 03:31:35.012461 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:31:35.012472 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:31:35.012482 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:31:35.012493 | orchestrator |
2026-02-19 03:31:35.012503 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-19 03:31:35.012514 | orchestrator | Thursday 19 February 2026 03:31:33 +0000 (0:00:00.318) 0:01:45.787 *****
2026-02-19 03:31:35.012527 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:35.012540 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:35.012552 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:35.012563 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:35.012608 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:35.012620 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:35.012632 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:35.012648 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:35.012669 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.295938 | orchestrator |
2026-02-19 03:31:42.296077 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-19 03:31:42.296097 | orchestrator | Thursday 19 February 2026 03:31:34 +0000 (0:00:01.535) 0:01:47.323 *****
2026-02-19 03:31:42.296111 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296127 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296139 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296151 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296214 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296264 | orchestrator |
2026-02-19 03:31:42.296275 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-19 03:31:42.296286 | orchestrator | Thursday 19 February 2026 03:31:38 +0000 (0:00:03.959) 0:01:51.283 *****
2026-02-19 03:31:42.296319 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296331 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296342 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296353 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296396 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 03:31:42.296407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn_sb_db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:31:42.296423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 03:31:42.296435 | orchestrator | 2026-02-19 03:31:42.296446 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-19 03:31:42.296457 | orchestrator | Thursday 19 February 2026 03:31:42 +0000 (0:00:03.120) 0:01:54.404 ***** 2026-02-19 03:31:42.296468 | orchestrator | 2026-02-19 03:31:42.296479 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-19 03:31:42.296490 | orchestrator | Thursday 19 February 2026 03:31:42 +0000 (0:00:00.069) 0:01:54.473 ***** 2026-02-19 03:31:42.296501 | orchestrator | 2026-02-19 03:31:42.296511 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-19 03:31:42.296522 | orchestrator | Thursday 19 February 2026 03:31:42 +0000 (0:00:00.061) 0:01:54.535 ***** 2026-02-19 03:31:42.296533 | orchestrator | 2026-02-19 03:31:42.296552 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-19 03:32:06.545510 | orchestrator | Thursday 19 February 2026 03:31:42 +0000 (0:00:00.064) 0:01:54.600 ***** 2026-02-19 03:32:06.545625 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:32:06.545638 | orchestrator | changed: 
[testbed-node-2] 2026-02-19 03:32:06.545651 | orchestrator | 2026-02-19 03:32:06.545672 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-19 03:32:06.545688 | orchestrator | Thursday 19 February 2026 03:31:48 +0000 (0:00:06.214) 0:02:00.814 ***** 2026-02-19 03:32:06.545718 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:32:06.545734 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:32:06.545743 | orchestrator | 2026-02-19 03:32:06.545751 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-19 03:32:06.545779 | orchestrator | Thursday 19 February 2026 03:31:54 +0000 (0:00:06.189) 0:02:07.003 ***** 2026-02-19 03:32:06.545787 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:32:06.545795 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:32:06.545803 | orchestrator | 2026-02-19 03:32:06.545811 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-19 03:32:06.545819 | orchestrator | Thursday 19 February 2026 03:32:00 +0000 (0:00:06.258) 0:02:13.262 ***** 2026-02-19 03:32:06.545826 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:32:06.545834 | orchestrator | 2026-02-19 03:32:06.545842 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-19 03:32:06.545850 | orchestrator | Thursday 19 February 2026 03:32:01 +0000 (0:00:00.134) 0:02:13.396 ***** 2026-02-19 03:32:06.545857 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:32:06.545866 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:32:06.545874 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:32:06.545882 | orchestrator | 2026-02-19 03:32:06.545889 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-19 03:32:06.545897 | orchestrator | Thursday 19 February 2026 03:32:02 +0000 (0:00:01.068) 0:02:14.465 ***** 
2026-02-19 03:32:06.545905 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:32:06.545913 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:32:06.545921 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:32:06.545928 | orchestrator | 2026-02-19 03:32:06.545936 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-19 03:32:06.545944 | orchestrator | Thursday 19 February 2026 03:32:02 +0000 (0:00:00.698) 0:02:15.163 ***** 2026-02-19 03:32:06.545952 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:32:06.545960 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:32:06.545968 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:32:06.545975 | orchestrator | 2026-02-19 03:32:06.545983 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-19 03:32:06.545991 | orchestrator | Thursday 19 February 2026 03:32:03 +0000 (0:00:00.795) 0:02:15.958 ***** 2026-02-19 03:32:06.545999 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:32:06.546007 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:32:06.546059 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:32:06.546070 | orchestrator | 2026-02-19 03:32:06.546080 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-19 03:32:06.546089 | orchestrator | Thursday 19 February 2026 03:32:04 +0000 (0:00:00.648) 0:02:16.607 ***** 2026-02-19 03:32:06.546109 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:32:06.546118 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:32:06.546136 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:32:06.546145 | orchestrator | 2026-02-19 03:32:06.546154 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-19 03:32:06.546165 | orchestrator | Thursday 19 February 2026 03:32:05 +0000 (0:00:01.048) 0:02:17.655 ***** 2026-02-19 03:32:06.546180 | orchestrator 
| ok: [testbed-node-0] 2026-02-19 03:32:06.546203 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:32:06.546217 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:32:06.546231 | orchestrator | 2026-02-19 03:32:06.546245 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:32:06.546263 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-19 03:32:06.546280 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-19 03:32:06.546312 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-19 03:32:06.546326 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:32:06.546356 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:32:06.546369 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:32:06.546379 | orchestrator | 2026-02-19 03:32:06.546388 | orchestrator | 2026-02-19 03:32:06.546410 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:32:06.546418 | orchestrator | Thursday 19 February 2026 03:32:06 +0000 (0:00:00.855) 0:02:18.511 ***** 2026-02-19 03:32:06.546426 | orchestrator | =============================================================================== 2026-02-19 03:32:06.546434 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.27s 2026-02-19 03:32:06.546442 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.62s 2026-02-19 03:32:06.546449 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.41s 2026-02-19 03:32:06.546457 | orchestrator | ovn-db 
: Restart ovn-nb-db container ----------------------------------- 12.85s 2026-02-19 03:32:06.546465 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.80s 2026-02-19 03:32:06.546488 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.97s 2026-02-19 03:32:06.546496 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.96s 2026-02-19 03:32:06.546504 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.12s 2026-02-19 03:32:06.546512 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.58s 2026-02-19 03:32:06.546519 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.15s 2026-02-19 03:32:06.546527 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.60s 2026-02-19 03:32:06.546535 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.55s 2026-02-19 03:32:06.546578 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.54s 2026-02-19 03:32:06.546587 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.51s 2026-02-19 03:32:06.546595 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.49s 2026-02-19 03:32:06.546603 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.34s 2026-02-19 03:32:06.546611 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.30s 2026-02-19 03:32:06.546618 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.15s 2026-02-19 03:32:06.546626 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.14s 2026-02-19 03:32:06.546634 | orchestrator | ovn-db : Get 
OVN_Northbound cluster leader ------------------------------ 1.07s 2026-02-19 03:32:06.879986 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-19 03:32:06.880090 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-02-19 03:32:09.005950 | orchestrator | 2026-02-19 03:32:09 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-19 03:32:19.138160 | orchestrator | 2026-02-19 03:32:19 | INFO  | Task 2abddef8-2f3f-4f28-845b-38e3b61b8623 (wipe-partitions) was prepared for execution. 2026-02-19 03:32:19.138259 | orchestrator | 2026-02-19 03:32:19 | INFO  | It takes a moment until task 2abddef8-2f3f-4f28-845b-38e3b61b8623 (wipe-partitions) has been started and output is visible here. 2026-02-19 03:32:32.133512 | orchestrator | 2026-02-19 03:32:32.133715 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-19 03:32:32.133732 | orchestrator | 2026-02-19 03:32:32.133746 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-19 03:32:32.133757 | orchestrator | Thursday 19 February 2026 03:32:23 +0000 (0:00:00.125) 0:00:00.125 ***** 2026-02-19 03:32:32.133791 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:32:32.133804 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:32:32.133815 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:32:32.133826 | orchestrator | 2026-02-19 03:32:32.133837 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-19 03:32:32.133848 | orchestrator | Thursday 19 February 2026 03:32:23 +0000 (0:00:00.630) 0:00:00.755 ***** 2026-02-19 03:32:32.133859 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:32:32.133870 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:32:32.133881 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:32:32.133891 | orchestrator | 2026-02-19 
03:32:32.133902 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-19 03:32:32.133913 | orchestrator | Thursday 19 February 2026 03:32:24 +0000 (0:00:00.387) 0:00:01.143 ***** 2026-02-19 03:32:32.133924 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:32:32.133936 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:32:32.133946 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:32:32.133957 | orchestrator | 2026-02-19 03:32:32.133968 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-19 03:32:32.133979 | orchestrator | Thursday 19 February 2026 03:32:24 +0000 (0:00:00.655) 0:00:01.798 ***** 2026-02-19 03:32:32.133989 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:32:32.134000 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:32:32.134012 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:32:32.134101 | orchestrator | 2026-02-19 03:32:32.134121 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-19 03:32:32.134140 | orchestrator | Thursday 19 February 2026 03:32:25 +0000 (0:00:00.258) 0:00:02.057 ***** 2026-02-19 03:32:32.134158 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-19 03:32:32.134171 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-19 03:32:32.134184 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-19 03:32:32.134196 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-19 03:32:32.134207 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-19 03:32:32.134218 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-19 03:32:32.134244 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-19 03:32:32.134255 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-19 03:32:32.134266 | orchestrator | changed: [testbed-node-4] => 
(item=/dev/sdd) 2026-02-19 03:32:32.134276 | orchestrator | 2026-02-19 03:32:32.134287 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-19 03:32:32.134298 | orchestrator | Thursday 19 February 2026 03:32:26 +0000 (0:00:01.330) 0:00:03.388 ***** 2026-02-19 03:32:32.134309 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-19 03:32:32.134319 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-19 03:32:32.134330 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-19 03:32:32.134340 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-19 03:32:32.134351 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-19 03:32:32.134361 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-02-19 03:32:32.134372 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-19 03:32:32.134382 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-19 03:32:32.134393 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-19 03:32:32.134403 | orchestrator | 2026-02-19 03:32:32.134414 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-19 03:32:32.134424 | orchestrator | Thursday 19 February 2026 03:32:28 +0000 (0:00:01.737) 0:00:05.125 ***** 2026-02-19 03:32:32.134435 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-19 03:32:32.134445 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-19 03:32:32.134456 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-19 03:32:32.134467 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-19 03:32:32.134488 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-19 03:32:32.134498 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-19 03:32:32.134509 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-19 03:32:32.134519 | 
orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-19 03:32:32.134558 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-19 03:32:32.134569 | orchestrator | 2026-02-19 03:32:32.134580 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-19 03:32:32.134590 | orchestrator | Thursday 19 February 2026 03:32:30 +0000 (0:00:02.207) 0:00:07.333 ***** 2026-02-19 03:32:32.134601 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:32:32.134612 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:32:32.134622 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:32:32.134633 | orchestrator | 2026-02-19 03:32:32.134643 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-02-19 03:32:32.134654 | orchestrator | Thursday 19 February 2026 03:32:31 +0000 (0:00:00.673) 0:00:08.006 ***** 2026-02-19 03:32:32.134665 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:32:32.134675 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:32:32.134686 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:32:32.134696 | orchestrator | 2026-02-19 03:32:32.134707 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:32:32.134719 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:32:32.134731 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:32:32.134762 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:32:32.134774 | orchestrator | 2026-02-19 03:32:32.134785 | orchestrator | 2026-02-19 03:32:32.134818 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:32:32.134830 | orchestrator | Thursday 19 February 2026 03:32:31 
+0000 (0:00:00.669) 0:00:08.676 ***** 2026-02-19 03:32:32.134841 | orchestrator | =============================================================================== 2026-02-19 03:32:32.134851 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.21s 2026-02-19 03:32:32.134862 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.74s 2026-02-19 03:32:32.134872 | orchestrator | Check device availability ----------------------------------------------- 1.33s 2026-02-19 03:32:32.134883 | orchestrator | Reload udev rules ------------------------------------------------------- 0.67s 2026-02-19 03:32:32.134894 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s 2026-02-19 03:32:32.134904 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.66s 2026-02-19 03:32:32.134915 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.63s 2026-02-19 03:32:32.134926 | orchestrator | Remove all rook related logical devices --------------------------------- 0.39s 2026-02-19 03:32:32.134937 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2026-02-19 03:32:44.640231 | orchestrator | 2026-02-19 03:32:44 | INFO  | Task baf095d0-2bdf-42eb-9d94-1721855fb611 (facts) was prepared for execution. 2026-02-19 03:32:44.640372 | orchestrator | 2026-02-19 03:32:44 | INFO  | It takes a moment until task baf095d0-2bdf-42eb-9d94-1721855fb611 (facts) has been started and output is visible here. 
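The wipe-partitions play above boils down to a handful of standard util-linux commands per data disk. A minimal sketch, run here against a scratch file rather than a real disk (the device paths `/dev/sdb`, `/dev/sdc`, `/dev/sdd` come from the log; the scratch file and its 64M size are stand-ins, and the play's actual task implementations are not visible in this log):

```shell
# Stand-in for one of the data disks (/dev/sdb, /dev/sdc, /dev/sdd in the log).
DEV=$(mktemp)
truncate -s 64M "$DEV"

# TASK [Wipe partitions with wipefs]: drop filesystem/partition signatures.
wipefs -a "$DEV"

# TASK [Overwrite first 32M with zeros]: clear leftover metadata at the start.
dd if=/dev/zero of="$DEV" bs=1M count=32 conv=notrunc status=none

# On real devices the play then reloads udev rules and retriggers events
# (both need root, so shown only as comments here):
#   udevadm control --reload-rules
#   udevadm trigger

rm -f "$DEV"
```

Zeroing only the first 32M is enough in practice because LVM, GPT, and Ceph OSD metadata all live near the start of the device; the trailing udev steps make the kernel re-read the now-blank devices before ceph-ansible enumerates them.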
2026-02-19 03:32:58.298896 | orchestrator | 2026-02-19 03:32:58.298989 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-19 03:32:58.298999 | orchestrator | 2026-02-19 03:32:58.299007 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-19 03:32:58.299013 | orchestrator | Thursday 19 February 2026 03:32:48 +0000 (0:00:00.264) 0:00:00.264 ***** 2026-02-19 03:32:58.299039 | orchestrator | ok: [testbed-manager] 2026-02-19 03:32:58.299047 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:32:58.299053 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:32:58.299059 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:32:58.299065 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:32:58.299071 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:32:58.299077 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:32:58.299083 | orchestrator | 2026-02-19 03:32:58.299089 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-19 03:32:58.299096 | orchestrator | Thursday 19 February 2026 03:32:50 +0000 (0:00:01.112) 0:00:01.377 ***** 2026-02-19 03:32:58.299103 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:32:58.299110 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:32:58.299116 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:32:58.299122 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:32:58.299128 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:32:58.299134 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:32:58.299142 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:32:58.299153 | orchestrator | 2026-02-19 03:32:58.299164 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-19 03:32:58.299175 | orchestrator | 2026-02-19 03:32:58.299186 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-19 03:32:58.299197 | orchestrator | Thursday 19 February 2026 03:32:51 +0000 (0:00:01.210) 0:00:02.588 ***** 2026-02-19 03:32:58.299209 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:32:58.299220 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:32:58.299231 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:32:58.299238 | orchestrator | ok: [testbed-manager] 2026-02-19 03:32:58.299244 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:32:58.299250 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:32:58.299256 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:32:58.299262 | orchestrator | 2026-02-19 03:32:58.299268 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-19 03:32:58.299274 | orchestrator | 2026-02-19 03:32:58.299280 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-19 03:32:58.299286 | orchestrator | Thursday 19 February 2026 03:32:57 +0000 (0:00:05.998) 0:00:08.586 ***** 2026-02-19 03:32:58.299292 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:32:58.299298 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:32:58.299304 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:32:58.299310 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:32:58.299316 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:32:58.299321 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:32:58.299327 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:32:58.299333 | orchestrator | 2026-02-19 03:32:58.299339 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:32:58.299346 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:32:58.299387 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-19 03:32:58.299394 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:32:58.299401 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:32:58.299407 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:32:58.299413 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:32:58.299425 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:32:58.299432 | orchestrator | 2026-02-19 03:32:58.299438 | orchestrator | 2026-02-19 03:32:58.299444 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:32:58.299450 | orchestrator | Thursday 19 February 2026 03:32:57 +0000 (0:00:00.553) 0:00:09.140 ***** 2026-02-19 03:32:58.299456 | orchestrator | =============================================================================== 2026-02-19 03:32:58.299464 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.00s 2026-02-19 03:32:58.299471 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2026-02-19 03:32:58.299477 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s 2026-02-19 03:32:58.299484 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-02-19 03:33:00.692441 | orchestrator | 2026-02-19 03:33:00 | INFO  | Task 53a2679d-2869-40d8-a43e-5fecf8739543 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-02-19 03:33:00.692722 | orchestrator | 2026-02-19 03:33:00 | INFO  | It takes a moment until task 53a2679d-2869-40d8-a43e-5fecf8739543 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-19 03:33:12.527957 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-19 03:33:12.528089 | orchestrator | 2.16.14 2026-02-19 03:33:12.528113 | orchestrator | 2026-02-19 03:33:12.528133 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-19 03:33:12.528153 | orchestrator | 2026-02-19 03:33:12.528172 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-19 03:33:12.528190 | orchestrator | Thursday 19 February 2026 03:33:05 +0000 (0:00:00.318) 0:00:00.318 ***** 2026-02-19 03:33:12.528210 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-19 03:33:12.528229 | orchestrator | 2026-02-19 03:33:12.528270 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-19 03:33:12.528284 | orchestrator | Thursday 19 February 2026 03:33:05 +0000 (0:00:00.247) 0:00:00.565 ***** 2026-02-19 03:33:12.528295 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:33:12.528306 | orchestrator | 2026-02-19 03:33:12.528317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.528328 | orchestrator | Thursday 19 February 2026 03:33:05 +0000 (0:00:00.240) 0:00:00.806 ***** 2026-02-19 03:33:12.528339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-19 03:33:12.528350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-19 03:33:12.528360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-19 03:33:12.528371 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-19 03:33:12.528383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-19 03:33:12.528402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-19 03:33:12.528421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-19 03:33:12.528439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-19 03:33:12.528456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-19 03:33:12.528473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-19 03:33:12.528525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-19 03:33:12.528546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-19 03:33:12.528598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-19 03:33:12.528619 | orchestrator | 2026-02-19 03:33:12.528639 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.528659 | orchestrator | Thursday 19 February 2026 03:33:06 +0000 (0:00:00.485) 0:00:01.292 ***** 2026-02-19 03:33:12.528678 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.528698 | orchestrator | 2026-02-19 03:33:12.528718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.528739 | orchestrator | Thursday 19 February 2026 03:33:06 +0000 (0:00:00.201) 0:00:01.494 ***** 2026-02-19 03:33:12.528758 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.528777 | orchestrator | 2026-02-19 03:33:12.528793 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.528810 | orchestrator | Thursday 19 February 2026 03:33:06 +0000 (0:00:00.213) 0:00:01.707 ***** 2026-02-19 03:33:12.528827 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.528843 | orchestrator | 2026-02-19 03:33:12.528860 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.528876 | orchestrator | Thursday 19 February 2026 03:33:06 +0000 (0:00:00.200) 0:00:01.908 ***** 2026-02-19 03:33:12.528893 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.528909 | orchestrator | 2026-02-19 03:33:12.528926 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.528942 | orchestrator | Thursday 19 February 2026 03:33:06 +0000 (0:00:00.194) 0:00:02.103 ***** 2026-02-19 03:33:12.528959 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.528976 | orchestrator | 2026-02-19 03:33:12.528992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.529008 | orchestrator | Thursday 19 February 2026 03:33:07 +0000 (0:00:00.201) 0:00:02.304 ***** 2026-02-19 03:33:12.529025 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.529041 | orchestrator | 2026-02-19 03:33:12.529058 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.529074 | orchestrator | Thursday 19 February 2026 03:33:07 +0000 (0:00:00.204) 0:00:02.508 ***** 2026-02-19 03:33:12.529091 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.529107 | orchestrator | 2026-02-19 03:33:12.529124 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.529141 | orchestrator | Thursday 19 February 2026 03:33:07 +0000 (0:00:00.204) 0:00:02.713 ***** 
2026-02-19 03:33:12.529157 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.529174 | orchestrator | 2026-02-19 03:33:12.529190 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.529208 | orchestrator | Thursday 19 February 2026 03:33:07 +0000 (0:00:00.192) 0:00:02.905 ***** 2026-02-19 03:33:12.529225 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9) 2026-02-19 03:33:12.529243 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9) 2026-02-19 03:33:12.529259 | orchestrator | 2026-02-19 03:33:12.529276 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.529316 | orchestrator | Thursday 19 February 2026 03:33:08 +0000 (0:00:00.418) 0:00:03.324 ***** 2026-02-19 03:33:12.529334 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e) 2026-02-19 03:33:12.529350 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e) 2026-02-19 03:33:12.529367 | orchestrator | 2026-02-19 03:33:12.529384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.529401 | orchestrator | Thursday 19 February 2026 03:33:08 +0000 (0:00:00.610) 0:00:03.934 ***** 2026-02-19 03:33:12.529428 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8) 2026-02-19 03:33:12.529457 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8) 2026-02-19 03:33:12.529477 | orchestrator | 2026-02-19 03:33:12.529518 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.529537 | orchestrator | Thursday 19 February 2026 03:33:09 
+0000 (0:00:00.654) 0:00:04.588 ***** 2026-02-19 03:33:12.529555 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417) 2026-02-19 03:33:12.529574 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417) 2026-02-19 03:33:12.529591 | orchestrator | 2026-02-19 03:33:12.529608 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:12.529625 | orchestrator | Thursday 19 February 2026 03:33:10 +0000 (0:00:00.873) 0:00:05.462 ***** 2026-02-19 03:33:12.529641 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-19 03:33:12.529659 | orchestrator | 2026-02-19 03:33:12.529675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:12.529692 | orchestrator | Thursday 19 February 2026 03:33:10 +0000 (0:00:00.346) 0:00:05.809 ***** 2026-02-19 03:33:12.529708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-19 03:33:12.529725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-19 03:33:12.529742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-19 03:33:12.529758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-19 03:33:12.529775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-19 03:33:12.529791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-19 03:33:12.529807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-19 03:33:12.529823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-02-19 03:33:12.529840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-19 03:33:12.529856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-19 03:33:12.529873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-19 03:33:12.529890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-19 03:33:12.529905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-19 03:33:12.529922 | orchestrator | 2026-02-19 03:33:12.529939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:12.529955 | orchestrator | Thursday 19 February 2026 03:33:11 +0000 (0:00:00.384) 0:00:06.193 ***** 2026-02-19 03:33:12.529972 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.529988 | orchestrator | 2026-02-19 03:33:12.530006 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:12.530113 | orchestrator | Thursday 19 February 2026 03:33:11 +0000 (0:00:00.205) 0:00:06.399 ***** 2026-02-19 03:33:12.530133 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.530153 | orchestrator | 2026-02-19 03:33:12.530172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:12.530190 | orchestrator | Thursday 19 February 2026 03:33:11 +0000 (0:00:00.206) 0:00:06.605 ***** 2026-02-19 03:33:12.530209 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.530229 | orchestrator | 2026-02-19 03:33:12.530248 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:12.530266 | orchestrator | Thursday 19 February 2026 03:33:11 
+0000 (0:00:00.218) 0:00:06.824 ***** 2026-02-19 03:33:12.530286 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.530319 | orchestrator | 2026-02-19 03:33:12.530338 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:12.530357 | orchestrator | Thursday 19 February 2026 03:33:11 +0000 (0:00:00.213) 0:00:07.038 ***** 2026-02-19 03:33:12.530373 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.530390 | orchestrator | 2026-02-19 03:33:12.530407 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:12.530423 | orchestrator | Thursday 19 February 2026 03:33:12 +0000 (0:00:00.205) 0:00:07.243 ***** 2026-02-19 03:33:12.530439 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.530455 | orchestrator | 2026-02-19 03:33:12.530470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:12.530486 | orchestrator | Thursday 19 February 2026 03:33:12 +0000 (0:00:00.213) 0:00:07.456 ***** 2026-02-19 03:33:12.530554 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:12.530570 | orchestrator | 2026-02-19 03:33:12.530601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:20.408884 | orchestrator | Thursday 19 February 2026 03:33:12 +0000 (0:00:00.212) 0:00:07.669 ***** 2026-02-19 03:33:20.408980 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.408992 | orchestrator | 2026-02-19 03:33:20.409001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:20.409008 | orchestrator | Thursday 19 February 2026 03:33:12 +0000 (0:00:00.206) 0:00:07.876 ***** 2026-02-19 03:33:20.409026 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-19 03:33:20.409034 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-19 
03:33:20.409051 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-19 03:33:20.409080 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-19 03:33:20.409088 | orchestrator | 2026-02-19 03:33:20.409095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:20.409101 | orchestrator | Thursday 19 February 2026 03:33:13 +0000 (0:00:01.039) 0:00:08.915 ***** 2026-02-19 03:33:20.409108 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409115 | orchestrator | 2026-02-19 03:33:20.409122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:20.409129 | orchestrator | Thursday 19 February 2026 03:33:13 +0000 (0:00:00.211) 0:00:09.126 ***** 2026-02-19 03:33:20.409135 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409142 | orchestrator | 2026-02-19 03:33:20.409149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:20.409155 | orchestrator | Thursday 19 February 2026 03:33:14 +0000 (0:00:00.210) 0:00:09.337 ***** 2026-02-19 03:33:20.409162 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409168 | orchestrator | 2026-02-19 03:33:20.409175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:20.409182 | orchestrator | Thursday 19 February 2026 03:33:14 +0000 (0:00:00.249) 0:00:09.587 ***** 2026-02-19 03:33:20.409189 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409195 | orchestrator | 2026-02-19 03:33:20.409202 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-19 03:33:20.409208 | orchestrator | Thursday 19 February 2026 03:33:14 +0000 (0:00:00.213) 0:00:09.801 ***** 2026-02-19 03:33:20.409215 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-02-19 03:33:20.409222 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-02-19 03:33:20.409229 | orchestrator | 2026-02-19 03:33:20.409235 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-19 03:33:20.409242 | orchestrator | Thursday 19 February 2026 03:33:14 +0000 (0:00:00.177) 0:00:09.978 ***** 2026-02-19 03:33:20.409248 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409255 | orchestrator | 2026-02-19 03:33:20.409262 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-19 03:33:20.409269 | orchestrator | Thursday 19 February 2026 03:33:14 +0000 (0:00:00.158) 0:00:10.136 ***** 2026-02-19 03:33:20.409293 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409301 | orchestrator | 2026-02-19 03:33:20.409307 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-19 03:33:20.409314 | orchestrator | Thursday 19 February 2026 03:33:15 +0000 (0:00:00.134) 0:00:10.271 ***** 2026-02-19 03:33:20.409321 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409327 | orchestrator | 2026-02-19 03:33:20.409334 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-19 03:33:20.409340 | orchestrator | Thursday 19 February 2026 03:33:15 +0000 (0:00:00.125) 0:00:10.396 ***** 2026-02-19 03:33:20.409347 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:33:20.409354 | orchestrator | 2026-02-19 03:33:20.409361 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-19 03:33:20.409367 | orchestrator | Thursday 19 February 2026 03:33:15 +0000 (0:00:00.175) 0:00:10.572 ***** 2026-02-19 03:33:20.409374 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc132c82-2da4-526a-8d14-ac4e81fe1159'}}) 2026-02-19 03:33:20.409382 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '900578fb-6201-5328-bc2d-5e3d92afe542'}}) 2026-02-19 03:33:20.409388 | orchestrator | 2026-02-19 03:33:20.409395 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-19 03:33:20.409401 | orchestrator | Thursday 19 February 2026 03:33:15 +0000 (0:00:00.182) 0:00:10.755 ***** 2026-02-19 03:33:20.409409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc132c82-2da4-526a-8d14-ac4e81fe1159'}})  2026-02-19 03:33:20.409418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '900578fb-6201-5328-bc2d-5e3d92afe542'}})  2026-02-19 03:33:20.409426 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409434 | orchestrator | 2026-02-19 03:33:20.409441 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-19 03:33:20.409449 | orchestrator | Thursday 19 February 2026 03:33:15 +0000 (0:00:00.381) 0:00:11.136 ***** 2026-02-19 03:33:20.409457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc132c82-2da4-526a-8d14-ac4e81fe1159'}})  2026-02-19 03:33:20.409465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '900578fb-6201-5328-bc2d-5e3d92afe542'}})  2026-02-19 03:33:20.409472 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409480 | orchestrator | 2026-02-19 03:33:20.409529 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-19 03:33:20.409537 | orchestrator | Thursday 19 February 2026 03:33:16 +0000 (0:00:00.153) 0:00:11.290 ***** 2026-02-19 03:33:20.409545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc132c82-2da4-526a-8d14-ac4e81fe1159'}})  2026-02-19 03:33:20.409567 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '900578fb-6201-5328-bc2d-5e3d92afe542'}})  2026-02-19 03:33:20.409575 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409583 | orchestrator | 2026-02-19 03:33:20.409592 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-19 03:33:20.409600 | orchestrator | Thursday 19 February 2026 03:33:16 +0000 (0:00:00.184) 0:00:11.475 ***** 2026-02-19 03:33:20.409607 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:33:20.409616 | orchestrator | 2026-02-19 03:33:20.409623 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-19 03:33:20.409635 | orchestrator | Thursday 19 February 2026 03:33:16 +0000 (0:00:00.151) 0:00:11.627 ***** 2026-02-19 03:33:20.409644 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:33:20.409651 | orchestrator | 2026-02-19 03:33:20.409659 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-19 03:33:20.409666 | orchestrator | Thursday 19 February 2026 03:33:16 +0000 (0:00:00.147) 0:00:11.774 ***** 2026-02-19 03:33:20.409680 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409688 | orchestrator | 2026-02-19 03:33:20.409696 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-19 03:33:20.409703 | orchestrator | Thursday 19 February 2026 03:33:16 +0000 (0:00:00.138) 0:00:11.912 ***** 2026-02-19 03:33:20.409711 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409718 | orchestrator | 2026-02-19 03:33:20.409726 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-19 03:33:20.409734 | orchestrator | Thursday 19 February 2026 03:33:16 +0000 (0:00:00.142) 0:00:12.055 ***** 2026-02-19 03:33:20.409741 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:33:20.409749 | orchestrator | 2026-02-19 
03:33:20.409757 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-19 03:33:20.409764 | orchestrator | Thursday 19 February 2026 03:33:17 +0000 (0:00:00.148) 0:00:12.204 *****
2026-02-19 03:33:20.409772 | orchestrator | ok: [testbed-node-3] => {
2026-02-19 03:33:20.409780 | orchestrator |  "ceph_osd_devices": {
2026-02-19 03:33:20.409788 | orchestrator |  "sdb": {
2026-02-19 03:33:20.409797 | orchestrator |  "osd_lvm_uuid": "dc132c82-2da4-526a-8d14-ac4e81fe1159"
2026-02-19 03:33:20.409808 | orchestrator |  },
2026-02-19 03:33:20.409820 | orchestrator |  "sdc": {
2026-02-19 03:33:20.409832 | orchestrator |  "osd_lvm_uuid": "900578fb-6201-5328-bc2d-5e3d92afe542"
2026-02-19 03:33:20.409842 | orchestrator |  }
2026-02-19 03:33:20.409853 | orchestrator |  }
2026-02-19 03:33:20.409863 | orchestrator | }
2026-02-19 03:33:20.409874 | orchestrator |
2026-02-19 03:33:20.409884 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-19 03:33:20.409895 | orchestrator | Thursday 19 February 2026 03:33:17 +0000 (0:00:00.146) 0:00:12.351 *****
2026-02-19 03:33:20.409906 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:33:20.409915 | orchestrator |
2026-02-19 03:33:20.409926 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-19 03:33:20.409938 | orchestrator | Thursday 19 February 2026 03:33:17 +0000 (0:00:00.147) 0:00:12.499 *****
2026-02-19 03:33:20.409948 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:33:20.409959 | orchestrator |
2026-02-19 03:33:20.409970 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-19 03:33:20.409981 | orchestrator | Thursday 19 February 2026 03:33:17 +0000 (0:00:00.137) 0:00:12.636 *****
2026-02-19 03:33:20.409992 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:33:20.410004 | orchestrator |
2026-02-19 03:33:20.410073 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-19 03:33:20.410083 | orchestrator | Thursday 19 February 2026 03:33:17 +0000 (0:00:00.146) 0:00:12.783 *****
2026-02-19 03:33:20.410090 | orchestrator | changed: [testbed-node-3] => {
2026-02-19 03:33:20.410096 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-02-19 03:33:20.410103 | orchestrator |  "ceph_osd_devices": {
2026-02-19 03:33:20.410110 | orchestrator |  "sdb": {
2026-02-19 03:33:20.410116 | orchestrator |  "osd_lvm_uuid": "dc132c82-2da4-526a-8d14-ac4e81fe1159"
2026-02-19 03:33:20.410123 | orchestrator |  },
2026-02-19 03:33:20.410129 | orchestrator |  "sdc": {
2026-02-19 03:33:20.410136 | orchestrator |  "osd_lvm_uuid": "900578fb-6201-5328-bc2d-5e3d92afe542"
2026-02-19 03:33:20.410143 | orchestrator |  }
2026-02-19 03:33:20.410149 | orchestrator |  },
2026-02-19 03:33:20.410156 | orchestrator |  "lvm_volumes": [
2026-02-19 03:33:20.410162 | orchestrator |  {
2026-02-19 03:33:20.410169 | orchestrator |  "data": "osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159",
2026-02-19 03:33:20.410176 | orchestrator |  "data_vg": "ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159"
2026-02-19 03:33:20.410182 | orchestrator |  },
2026-02-19 03:33:20.410189 | orchestrator |  {
2026-02-19 03:33:20.410198 | orchestrator |  "data": "osd-block-900578fb-6201-5328-bc2d-5e3d92afe542",
2026-02-19 03:33:20.410219 | orchestrator |  "data_vg": "ceph-900578fb-6201-5328-bc2d-5e3d92afe542"
2026-02-19 03:33:20.410230 | orchestrator |  }
2026-02-19 03:33:20.410241 | orchestrator |  ]
2026-02-19 03:33:20.410253 | orchestrator |  }
2026-02-19 03:33:20.410263 | orchestrator | }
2026-02-19 03:33:20.410274 | orchestrator |
2026-02-19 03:33:20.410285 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-19 03:33:20.410297 | orchestrator | Thursday 19 February 2026 03:33:18 +0000 (0:00:00.433) 0:00:13.216 *****
2026-02-19
03:33:20.410308 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-19 03:33:20.410319 | orchestrator | 2026-02-19 03:33:20.410330 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-19 03:33:20.410337 | orchestrator | 2026-02-19 03:33:20.410344 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-19 03:33:20.410350 | orchestrator | Thursday 19 February 2026 03:33:19 +0000 (0:00:01.812) 0:00:15.029 ***** 2026-02-19 03:33:20.410357 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-19 03:33:20.410363 | orchestrator | 2026-02-19 03:33:20.410370 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-19 03:33:20.410376 | orchestrator | Thursday 19 February 2026 03:33:20 +0000 (0:00:00.264) 0:00:15.293 ***** 2026-02-19 03:33:20.410383 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:33:20.410389 | orchestrator | 2026-02-19 03:33:20.410405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.658126 | orchestrator | Thursday 19 February 2026 03:33:20 +0000 (0:00:00.268) 0:00:15.561 ***** 2026-02-19 03:33:29.658235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-19 03:33:29.658251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-19 03:33:29.658262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-19 03:33:29.658290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-19 03:33:29.658301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-19 03:33:29.658313 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-19 03:33:29.658324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-19 03:33:29.658334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-19 03:33:29.658346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-19 03:33:29.658357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-19 03:33:29.658367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-19 03:33:29.658378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-19 03:33:29.658389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-19 03:33:29.658400 | orchestrator | 2026-02-19 03:33:29.658412 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.658423 | orchestrator | Thursday 19 February 2026 03:33:20 +0000 (0:00:00.444) 0:00:16.006 ***** 2026-02-19 03:33:29.658434 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.658447 | orchestrator | 2026-02-19 03:33:29.658458 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.658468 | orchestrator | Thursday 19 February 2026 03:33:21 +0000 (0:00:00.217) 0:00:16.223 ***** 2026-02-19 03:33:29.658506 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.658518 | orchestrator | 2026-02-19 03:33:29.658529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.658540 | orchestrator | Thursday 19 February 2026 03:33:21 +0000 (0:00:00.217) 0:00:16.441 ***** 2026-02-19 03:33:29.658574 | orchestrator | skipping: 
[testbed-node-4] 2026-02-19 03:33:29.658587 | orchestrator | 2026-02-19 03:33:29.658601 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.658613 | orchestrator | Thursday 19 February 2026 03:33:21 +0000 (0:00:00.194) 0:00:16.635 ***** 2026-02-19 03:33:29.658625 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.658637 | orchestrator | 2026-02-19 03:33:29.658649 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.658662 | orchestrator | Thursday 19 February 2026 03:33:22 +0000 (0:00:00.613) 0:00:17.249 ***** 2026-02-19 03:33:29.658674 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.658686 | orchestrator | 2026-02-19 03:33:29.658698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.658711 | orchestrator | Thursday 19 February 2026 03:33:22 +0000 (0:00:00.229) 0:00:17.478 ***** 2026-02-19 03:33:29.658724 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.658736 | orchestrator | 2026-02-19 03:33:29.658747 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.658757 | orchestrator | Thursday 19 February 2026 03:33:22 +0000 (0:00:00.220) 0:00:17.699 ***** 2026-02-19 03:33:29.658768 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.658779 | orchestrator | 2026-02-19 03:33:29.658789 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.658800 | orchestrator | Thursday 19 February 2026 03:33:22 +0000 (0:00:00.205) 0:00:17.905 ***** 2026-02-19 03:33:29.658810 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.658821 | orchestrator | 2026-02-19 03:33:29.658831 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.658842 | 
orchestrator | Thursday 19 February 2026 03:33:22 +0000 (0:00:00.204) 0:00:18.109 ***** 2026-02-19 03:33:29.658853 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec) 2026-02-19 03:33:29.658864 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec) 2026-02-19 03:33:29.658876 | orchestrator | 2026-02-19 03:33:29.658887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.658898 | orchestrator | Thursday 19 February 2026 03:33:23 +0000 (0:00:00.423) 0:00:18.533 ***** 2026-02-19 03:33:29.658908 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262) 2026-02-19 03:33:29.658919 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262) 2026-02-19 03:33:29.658930 | orchestrator | 2026-02-19 03:33:29.658941 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.658952 | orchestrator | Thursday 19 February 2026 03:33:23 +0000 (0:00:00.461) 0:00:18.994 ***** 2026-02-19 03:33:29.658962 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0) 2026-02-19 03:33:29.658973 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0) 2026-02-19 03:33:29.658984 | orchestrator | 2026-02-19 03:33:29.658994 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.659024 | orchestrator | Thursday 19 February 2026 03:33:24 +0000 (0:00:00.436) 0:00:19.431 ***** 2026-02-19 03:33:29.659036 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58) 2026-02-19 03:33:29.659047 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58) 2026-02-19 03:33:29.659058 | orchestrator | 2026-02-19 03:33:29.659068 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:29.659085 | orchestrator | Thursday 19 February 2026 03:33:24 +0000 (0:00:00.654) 0:00:20.086 ***** 2026-02-19 03:33:29.659096 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-19 03:33:29.659115 | orchestrator | 2026-02-19 03:33:29.659125 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:29.659136 | orchestrator | Thursday 19 February 2026 03:33:25 +0000 (0:00:00.557) 0:00:20.643 ***** 2026-02-19 03:33:29.659147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-19 03:33:29.659157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-19 03:33:29.659168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-19 03:33:29.659178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-19 03:33:29.659189 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-19 03:33:29.659199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-19 03:33:29.659210 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-19 03:33:29.659220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-19 03:33:29.659230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-19 03:33:29.659241 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-19 03:33:29.659252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-19 03:33:29.659263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-19 03:33:29.659273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-19 03:33:29.659284 | orchestrator | 2026-02-19 03:33:29.659295 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:29.659306 | orchestrator | Thursday 19 February 2026 03:33:26 +0000 (0:00:00.874) 0:00:21.517 ***** 2026-02-19 03:33:29.659316 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.659327 | orchestrator | 2026-02-19 03:33:29.659338 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:29.659348 | orchestrator | Thursday 19 February 2026 03:33:26 +0000 (0:00:00.221) 0:00:21.738 ***** 2026-02-19 03:33:29.659359 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.659370 | orchestrator | 2026-02-19 03:33:29.659380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:29.659391 | orchestrator | Thursday 19 February 2026 03:33:26 +0000 (0:00:00.227) 0:00:21.966 ***** 2026-02-19 03:33:29.659401 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.659412 | orchestrator | 2026-02-19 03:33:29.659422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:29.659433 | orchestrator | Thursday 19 February 2026 03:33:27 +0000 (0:00:00.235) 0:00:22.201 ***** 2026-02-19 03:33:29.659444 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.659454 | orchestrator | 2026-02-19 03:33:29.659465 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-19 03:33:29.659475 | orchestrator | Thursday 19 February 2026 03:33:27 +0000 (0:00:00.213) 0:00:22.415 ***** 2026-02-19 03:33:29.659540 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.659553 | orchestrator | 2026-02-19 03:33:29.659563 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:29.659574 | orchestrator | Thursday 19 February 2026 03:33:27 +0000 (0:00:00.215) 0:00:22.630 ***** 2026-02-19 03:33:29.659585 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.659595 | orchestrator | 2026-02-19 03:33:29.659606 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:29.659617 | orchestrator | Thursday 19 February 2026 03:33:27 +0000 (0:00:00.215) 0:00:22.845 ***** 2026-02-19 03:33:29.659627 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.659646 | orchestrator | 2026-02-19 03:33:29.659657 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:29.659667 | orchestrator | Thursday 19 February 2026 03:33:27 +0000 (0:00:00.212) 0:00:23.058 ***** 2026-02-19 03:33:29.659678 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:29.659689 | orchestrator | 2026-02-19 03:33:29.659700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:29.659714 | orchestrator | Thursday 19 February 2026 03:33:28 +0000 (0:00:00.222) 0:00:23.281 ***** 2026-02-19 03:33:29.659733 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-19 03:33:29.659758 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-19 03:33:29.659782 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-19 03:33:29.659799 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-19 03:33:29.659815 | orchestrator | 2026-02-19 
03:33:29.659833 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:29.659850 | orchestrator | Thursday 19 February 2026 03:33:29 +0000 (0:00:00.895) 0:00:24.177 ***** 2026-02-19 03:33:29.659869 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.761700 | orchestrator | 2026-02-19 03:33:35.761829 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:35.761853 | orchestrator | Thursday 19 February 2026 03:33:29 +0000 (0:00:00.632) 0:00:24.809 ***** 2026-02-19 03:33:35.761869 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.761886 | orchestrator | 2026-02-19 03:33:35.761901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:35.761916 | orchestrator | Thursday 19 February 2026 03:33:29 +0000 (0:00:00.212) 0:00:25.022 ***** 2026-02-19 03:33:35.761947 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.761957 | orchestrator | 2026-02-19 03:33:35.761965 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:35.761974 | orchestrator | Thursday 19 February 2026 03:33:30 +0000 (0:00:00.215) 0:00:25.238 ***** 2026-02-19 03:33:35.761983 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.761992 | orchestrator | 2026-02-19 03:33:35.762000 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-19 03:33:35.762009 | orchestrator | Thursday 19 February 2026 03:33:30 +0000 (0:00:00.218) 0:00:25.457 ***** 2026-02-19 03:33:35.762070 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-19 03:33:35.762080 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-19 03:33:35.762089 | orchestrator | 2026-02-19 03:33:35.762102 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-02-19 03:33:35.762118 | orchestrator | Thursday 19 February 2026 03:33:30 +0000 (0:00:00.181) 0:00:25.639 ***** 2026-02-19 03:33:35.762133 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.762150 | orchestrator | 2026-02-19 03:33:35.762166 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-19 03:33:35.762182 | orchestrator | Thursday 19 February 2026 03:33:30 +0000 (0:00:00.133) 0:00:25.772 ***** 2026-02-19 03:33:35.762196 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.762209 | orchestrator | 2026-02-19 03:33:35.762235 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-19 03:33:35.762250 | orchestrator | Thursday 19 February 2026 03:33:30 +0000 (0:00:00.145) 0:00:25.917 ***** 2026-02-19 03:33:35.762264 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.762278 | orchestrator | 2026-02-19 03:33:35.762292 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-19 03:33:35.762306 | orchestrator | Thursday 19 February 2026 03:33:30 +0000 (0:00:00.148) 0:00:26.066 ***** 2026-02-19 03:33:35.762322 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:33:35.762337 | orchestrator | 2026-02-19 03:33:35.762352 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-19 03:33:35.762368 | orchestrator | Thursday 19 February 2026 03:33:31 +0000 (0:00:00.131) 0:00:26.197 ***** 2026-02-19 03:33:35.762408 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '64a1f4ab-0c55-53ad-929a-fda4cfe46a02'}}) 2026-02-19 03:33:35.762425 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'}}) 2026-02-19 03:33:35.762438 | orchestrator | 2026-02-19 03:33:35.762449 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-19 03:33:35.762459 | orchestrator | Thursday 19 February 2026 03:33:31 +0000 (0:00:00.192) 0:00:26.390 ***** 2026-02-19 03:33:35.762470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '64a1f4ab-0c55-53ad-929a-fda4cfe46a02'}})  2026-02-19 03:33:35.762542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'}})  2026-02-19 03:33:35.762558 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.762574 | orchestrator | 2026-02-19 03:33:35.762589 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-19 03:33:35.762604 | orchestrator | Thursday 19 February 2026 03:33:31 +0000 (0:00:00.161) 0:00:26.552 ***** 2026-02-19 03:33:35.762614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '64a1f4ab-0c55-53ad-929a-fda4cfe46a02'}})  2026-02-19 03:33:35.762623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'}})  2026-02-19 03:33:35.762632 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.762640 | orchestrator | 2026-02-19 03:33:35.762649 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-19 03:33:35.762658 | orchestrator | Thursday 19 February 2026 03:33:31 +0000 (0:00:00.431) 0:00:26.983 ***** 2026-02-19 03:33:35.762666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '64a1f4ab-0c55-53ad-929a-fda4cfe46a02'}})  2026-02-19 03:33:35.762675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'}})  2026-02-19 03:33:35.762684 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.762692 | 
orchestrator | 2026-02-19 03:33:35.762701 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-19 03:33:35.762710 | orchestrator | Thursday 19 February 2026 03:33:31 +0000 (0:00:00.166) 0:00:27.150 ***** 2026-02-19 03:33:35.762718 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:33:35.762727 | orchestrator | 2026-02-19 03:33:35.762735 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-19 03:33:35.762744 | orchestrator | Thursday 19 February 2026 03:33:32 +0000 (0:00:00.156) 0:00:27.307 ***** 2026-02-19 03:33:35.762753 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:33:35.762761 | orchestrator | 2026-02-19 03:33:35.762772 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-19 03:33:35.762787 | orchestrator | Thursday 19 February 2026 03:33:32 +0000 (0:00:00.151) 0:00:27.458 ***** 2026-02-19 03:33:35.762825 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.762841 | orchestrator | 2026-02-19 03:33:35.762854 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-19 03:33:35.762863 | orchestrator | Thursday 19 February 2026 03:33:32 +0000 (0:00:00.140) 0:00:27.598 ***** 2026-02-19 03:33:35.762872 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.762880 | orchestrator | 2026-02-19 03:33:35.762889 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-19 03:33:35.762897 | orchestrator | Thursday 19 February 2026 03:33:32 +0000 (0:00:00.152) 0:00:27.751 ***** 2026-02-19 03:33:35.762914 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.762923 | orchestrator | 2026-02-19 03:33:35.762931 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-19 03:33:35.762940 | orchestrator | Thursday 19 February 2026 03:33:32 +0000 
(0:00:00.130) 0:00:27.882 ***** 2026-02-19 03:33:35.762957 | orchestrator | ok: [testbed-node-4] => { 2026-02-19 03:33:35.762966 | orchestrator |  "ceph_osd_devices": { 2026-02-19 03:33:35.762975 | orchestrator |  "sdb": { 2026-02-19 03:33:35.762984 | orchestrator |  "osd_lvm_uuid": "64a1f4ab-0c55-53ad-929a-fda4cfe46a02" 2026-02-19 03:33:35.762992 | orchestrator |  }, 2026-02-19 03:33:35.763001 | orchestrator |  "sdc": { 2026-02-19 03:33:35.763009 | orchestrator |  "osd_lvm_uuid": "ac535f4d-dfa1-5efd-bfb5-368e6c7a2160" 2026-02-19 03:33:35.763018 | orchestrator |  } 2026-02-19 03:33:35.763027 | orchestrator |  } 2026-02-19 03:33:35.763039 | orchestrator | } 2026-02-19 03:33:35.763054 | orchestrator | 2026-02-19 03:33:35.763069 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-19 03:33:35.763084 | orchestrator | Thursday 19 February 2026 03:33:32 +0000 (0:00:00.149) 0:00:28.032 ***** 2026-02-19 03:33:35.763099 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.763111 | orchestrator | 2026-02-19 03:33:35.763120 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-19 03:33:35.763129 | orchestrator | Thursday 19 February 2026 03:33:33 +0000 (0:00:00.162) 0:00:28.194 ***** 2026-02-19 03:33:35.763137 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.763146 | orchestrator | 2026-02-19 03:33:35.763154 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-19 03:33:35.763162 | orchestrator | Thursday 19 February 2026 03:33:33 +0000 (0:00:00.155) 0:00:28.350 ***** 2026-02-19 03:33:35.763171 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:33:35.763179 | orchestrator | 2026-02-19 03:33:35.763188 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-19 03:33:35.763196 | orchestrator | Thursday 19 February 2026 03:33:33 +0000 
(0:00:00.146) 0:00:28.496 ***** 2026-02-19 03:33:35.763205 | orchestrator | changed: [testbed-node-4] => { 2026-02-19 03:33:35.763213 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-19 03:33:35.763222 | orchestrator |  "ceph_osd_devices": { 2026-02-19 03:33:35.763231 | orchestrator |  "sdb": { 2026-02-19 03:33:35.763239 | orchestrator |  "osd_lvm_uuid": "64a1f4ab-0c55-53ad-929a-fda4cfe46a02" 2026-02-19 03:33:35.763248 | orchestrator |  }, 2026-02-19 03:33:35.763256 | orchestrator |  "sdc": { 2026-02-19 03:33:35.763265 | orchestrator |  "osd_lvm_uuid": "ac535f4d-dfa1-5efd-bfb5-368e6c7a2160" 2026-02-19 03:33:35.763273 | orchestrator |  } 2026-02-19 03:33:35.763282 | orchestrator |  }, 2026-02-19 03:33:35.763290 | orchestrator |  "lvm_volumes": [ 2026-02-19 03:33:35.763299 | orchestrator |  { 2026-02-19 03:33:35.763307 | orchestrator |  "data": "osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02", 2026-02-19 03:33:35.763316 | orchestrator |  "data_vg": "ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02" 2026-02-19 03:33:35.763324 | orchestrator |  }, 2026-02-19 03:33:35.763333 | orchestrator |  { 2026-02-19 03:33:35.763341 | orchestrator |  "data": "osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160", 2026-02-19 03:33:35.763350 | orchestrator |  "data_vg": "ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160" 2026-02-19 03:33:35.763358 | orchestrator |  } 2026-02-19 03:33:35.763367 | orchestrator |  ] 2026-02-19 03:33:35.763375 | orchestrator |  } 2026-02-19 03:33:35.763384 | orchestrator | } 2026-02-19 03:33:35.763392 | orchestrator | 2026-02-19 03:33:35.763401 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-19 03:33:35.763410 | orchestrator | Thursday 19 February 2026 03:33:33 +0000 (0:00:00.410) 0:00:28.907 ***** 2026-02-19 03:33:35.763418 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-19 03:33:35.763427 | orchestrator | 2026-02-19 03:33:35.763435 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-19 03:33:35.763444 | orchestrator | 2026-02-19 03:33:35.763452 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-19 03:33:35.763461 | orchestrator | Thursday 19 February 2026 03:33:34 +0000 (0:00:01.132) 0:00:30.040 ***** 2026-02-19 03:33:35.763501 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-19 03:33:35.763511 | orchestrator | 2026-02-19 03:33:35.763520 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-19 03:33:35.763528 | orchestrator | Thursday 19 February 2026 03:33:35 +0000 (0:00:00.257) 0:00:30.297 ***** 2026-02-19 03:33:35.763537 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:33:35.763545 | orchestrator | 2026-02-19 03:33:35.763554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:35.763562 | orchestrator | Thursday 19 February 2026 03:33:35 +0000 (0:00:00.234) 0:00:30.531 ***** 2026-02-19 03:33:35.763571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-19 03:33:35.763579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-19 03:33:35.763588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-19 03:33:35.763596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-19 03:33:35.763604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-19 03:33:35.763620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-19 03:33:44.743637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-19 03:33:44.743774 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-19 03:33:44.743799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-19 03:33:44.743816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-19 03:33:44.743852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-19 03:33:44.743871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-19 03:33:44.743887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-19 03:33:44.743905 | orchestrator | 2026-02-19 03:33:44.743923 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:44.743934 | orchestrator | Thursday 19 February 2026 03:33:35 +0000 (0:00:00.378) 0:00:30.910 ***** 2026-02-19 03:33:44.743944 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.743955 | orchestrator | 2026-02-19 03:33:44.743966 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:44.743976 | orchestrator | Thursday 19 February 2026 03:33:35 +0000 (0:00:00.227) 0:00:31.138 ***** 2026-02-19 03:33:44.743986 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.743995 | orchestrator | 2026-02-19 03:33:44.744011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:44.744028 | orchestrator | Thursday 19 February 2026 03:33:36 +0000 (0:00:00.199) 0:00:31.338 ***** 2026-02-19 03:33:44.744044 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.744061 | orchestrator | 2026-02-19 03:33:44.744073 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:44.744085 | 
orchestrator | Thursday 19 February 2026 03:33:36 +0000 (0:00:00.191) 0:00:31.529 ***** 2026-02-19 03:33:44.744096 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.744107 | orchestrator | 2026-02-19 03:33:44.744118 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:44.744130 | orchestrator | Thursday 19 February 2026 03:33:37 +0000 (0:00:00.638) 0:00:32.167 ***** 2026-02-19 03:33:44.744141 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.744152 | orchestrator | 2026-02-19 03:33:44.744163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:44.744174 | orchestrator | Thursday 19 February 2026 03:33:37 +0000 (0:00:00.210) 0:00:32.377 ***** 2026-02-19 03:33:44.744210 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.744221 | orchestrator | 2026-02-19 03:33:44.744231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:44.744240 | orchestrator | Thursday 19 February 2026 03:33:37 +0000 (0:00:00.212) 0:00:32.589 ***** 2026-02-19 03:33:44.744250 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.744260 | orchestrator | 2026-02-19 03:33:44.744269 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:44.744279 | orchestrator | Thursday 19 February 2026 03:33:37 +0000 (0:00:00.228) 0:00:32.818 ***** 2026-02-19 03:33:44.744288 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.744298 | orchestrator | 2026-02-19 03:33:44.744308 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:44.744317 | orchestrator | Thursday 19 February 2026 03:33:37 +0000 (0:00:00.221) 0:00:33.039 ***** 2026-02-19 03:33:44.744327 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf) 2026-02-19 03:33:44.744338 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf) 2026-02-19 03:33:44.744348 | orchestrator | 2026-02-19 03:33:44.744357 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:44.744367 | orchestrator | Thursday 19 February 2026 03:33:38 +0000 (0:00:00.452) 0:00:33.492 ***** 2026-02-19 03:33:44.744376 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42) 2026-02-19 03:33:44.744386 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42) 2026-02-19 03:33:44.744396 | orchestrator | 2026-02-19 03:33:44.744405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:44.744415 | orchestrator | Thursday 19 February 2026 03:33:38 +0000 (0:00:00.438) 0:00:33.930 ***** 2026-02-19 03:33:44.744424 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46) 2026-02-19 03:33:44.744434 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46) 2026-02-19 03:33:44.744444 | orchestrator | 2026-02-19 03:33:44.744456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:33:44.744498 | orchestrator | Thursday 19 February 2026 03:33:39 +0000 (0:00:00.422) 0:00:34.352 ***** 2026-02-19 03:33:44.744516 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b) 2026-02-19 03:33:44.744532 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b) 2026-02-19 03:33:44.744547 | orchestrator | 2026-02-19 03:33:44.744563 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-19 03:33:44.744579 | orchestrator | Thursday 19 February 2026 03:33:39 +0000 (0:00:00.434) 0:00:34.787 ***** 2026-02-19 03:33:44.744595 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-19 03:33:44.744609 | orchestrator | 2026-02-19 03:33:44.744624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.744665 | orchestrator | Thursday 19 February 2026 03:33:39 +0000 (0:00:00.342) 0:00:35.129 ***** 2026-02-19 03:33:44.744683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-19 03:33:44.744700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-19 03:33:44.744717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-19 03:33:44.744744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-19 03:33:44.744755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-19 03:33:44.744764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-19 03:33:44.744784 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-19 03:33:44.744794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-19 03:33:44.744803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-19 03:33:44.744813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-19 03:33:44.744823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-02-19 03:33:44.744832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-19 03:33:44.744842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-19 03:33:44.744851 | orchestrator | 2026-02-19 03:33:44.744861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.744870 | orchestrator | Thursday 19 February 2026 03:33:40 +0000 (0:00:00.656) 0:00:35.785 ***** 2026-02-19 03:33:44.744880 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.744889 | orchestrator | 2026-02-19 03:33:44.744899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.744909 | orchestrator | Thursday 19 February 2026 03:33:40 +0000 (0:00:00.240) 0:00:36.026 ***** 2026-02-19 03:33:44.744918 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.744928 | orchestrator | 2026-02-19 03:33:44.744937 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.744947 | orchestrator | Thursday 19 February 2026 03:33:41 +0000 (0:00:00.246) 0:00:36.273 ***** 2026-02-19 03:33:44.744956 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.744966 | orchestrator | 2026-02-19 03:33:44.744976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.744985 | orchestrator | Thursday 19 February 2026 03:33:41 +0000 (0:00:00.229) 0:00:36.502 ***** 2026-02-19 03:33:44.744995 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.745005 | orchestrator | 2026-02-19 03:33:44.745014 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.745024 | orchestrator | Thursday 19 February 2026 03:33:41 +0000 (0:00:00.216) 0:00:36.719 ***** 2026-02-19 03:33:44.745033 
| orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.745043 | orchestrator | 2026-02-19 03:33:44.745052 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.745062 | orchestrator | Thursday 19 February 2026 03:33:41 +0000 (0:00:00.213) 0:00:36.933 ***** 2026-02-19 03:33:44.745072 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.745081 | orchestrator | 2026-02-19 03:33:44.745091 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.745100 | orchestrator | Thursday 19 February 2026 03:33:41 +0000 (0:00:00.205) 0:00:37.139 ***** 2026-02-19 03:33:44.745110 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.745119 | orchestrator | 2026-02-19 03:33:44.745129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.745139 | orchestrator | Thursday 19 February 2026 03:33:42 +0000 (0:00:00.214) 0:00:37.353 ***** 2026-02-19 03:33:44.745148 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.745158 | orchestrator | 2026-02-19 03:33:44.745167 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.745177 | orchestrator | Thursday 19 February 2026 03:33:42 +0000 (0:00:00.216) 0:00:37.569 ***** 2026-02-19 03:33:44.745212 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-19 03:33:44.745222 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-19 03:33:44.745232 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-19 03:33:44.745242 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-19 03:33:44.745252 | orchestrator | 2026-02-19 03:33:44.745268 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.745278 | orchestrator | Thursday 19 February 2026 03:33:43 +0000 (0:00:00.922) 
0:00:38.492 ***** 2026-02-19 03:33:44.745288 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.745298 | orchestrator | 2026-02-19 03:33:44.745307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.745317 | orchestrator | Thursday 19 February 2026 03:33:43 +0000 (0:00:00.270) 0:00:38.762 ***** 2026-02-19 03:33:44.745326 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.745336 | orchestrator | 2026-02-19 03:33:44.745346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.745355 | orchestrator | Thursday 19 February 2026 03:33:43 +0000 (0:00:00.208) 0:00:38.971 ***** 2026-02-19 03:33:44.745365 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.745374 | orchestrator | 2026-02-19 03:33:44.745384 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:33:44.745394 | orchestrator | Thursday 19 February 2026 03:33:44 +0000 (0:00:00.717) 0:00:39.689 ***** 2026-02-19 03:33:44.745404 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:33:44.745413 | orchestrator | 2026-02-19 03:33:44.745430 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-19 03:33:49.116904 | orchestrator | Thursday 19 February 2026 03:33:44 +0000 (0:00:00.206) 0:00:39.896 ***** 2026-02-19 03:33:49.116976 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-19 03:33:49.116982 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-19 03:33:49.116986 | orchestrator | 2026-02-19 03:33:49.116991 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-19 03:33:49.117009 | orchestrator | Thursday 19 February 2026 03:33:44 +0000 (0:00:00.179) 0:00:40.075 ***** 2026-02-19 03:33:49.117013 | orchestrator | skipping: 
[testbed-node-5]
2026-02-19 03:33:49.117018 | orchestrator |
2026-02-19 03:33:49.117022 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-19 03:33:49.117026 | orchestrator | Thursday 19 February 2026 03:33:45 +0000 (0:00:00.141) 0:00:40.216 *****
2026-02-19 03:33:49.117030 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:33:49.117034 | orchestrator |
2026-02-19 03:33:49.117038 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-19 03:33:49.117042 | orchestrator | Thursday 19 February 2026 03:33:45 +0000 (0:00:00.138) 0:00:40.355 *****
2026-02-19 03:33:49.117046 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:33:49.117050 | orchestrator |
2026-02-19 03:33:49.117054 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-19 03:33:49.117058 | orchestrator | Thursday 19 February 2026 03:33:45 +0000 (0:00:00.138) 0:00:40.494 *****
2026-02-19 03:33:49.117062 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:33:49.117067 | orchestrator |
2026-02-19 03:33:49.117071 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-19 03:33:49.117075 | orchestrator | Thursday 19 February 2026 03:33:45 +0000 (0:00:00.142) 0:00:40.636 *****
2026-02-19 03:33:49.117079 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98b2861f-503b-5d91-adc9-6468e68ac210'}})
2026-02-19 03:33:49.117084 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3bb39c06-9317-5e70-9108-eeec2efc4456'}})
2026-02-19 03:33:49.117088 | orchestrator |
2026-02-19 03:33:49.117092 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-19 03:33:49.117095 | orchestrator | Thursday 19 February 2026 03:33:45 +0000 (0:00:00.178) 0:00:40.815 *****
2026-02-19 03:33:49.117100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98b2861f-503b-5d91-adc9-6468e68ac210'}})
2026-02-19 03:33:49.117105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3bb39c06-9317-5e70-9108-eeec2efc4456'}})
2026-02-19 03:33:49.117109 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:33:49.117127 | orchestrator |
2026-02-19 03:33:49.117131 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-19 03:33:49.117135 | orchestrator | Thursday 19 February 2026 03:33:45 +0000 (0:00:00.150) 0:00:40.965 *****
2026-02-19 03:33:49.117139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98b2861f-503b-5d91-adc9-6468e68ac210'}})
2026-02-19 03:33:49.117143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3bb39c06-9317-5e70-9108-eeec2efc4456'}})
2026-02-19 03:33:49.117147 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:33:49.117151 | orchestrator |
2026-02-19 03:33:49.117155 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-19 03:33:49.117159 | orchestrator | Thursday 19 February 2026 03:33:45 +0000 (0:00:00.179) 0:00:41.145 *****
2026-02-19 03:33:49.117163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98b2861f-503b-5d91-adc9-6468e68ac210'}})
2026-02-19 03:33:49.117167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3bb39c06-9317-5e70-9108-eeec2efc4456'}})
2026-02-19 03:33:49.117171 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:33:49.117175 | orchestrator |
2026-02-19 03:33:49.117179 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-19 03:33:49.117182 | orchestrator | Thursday 19 February 2026 03:33:46 +0000 (0:00:00.160) 0:00:41.305 *****
2026-02-19 03:33:49.117186 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:33:49.117190 | orchestrator |
2026-02-19 03:33:49.117194 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-19 03:33:49.117198 | orchestrator | Thursday 19 February 2026 03:33:46 +0000 (0:00:00.178) 0:00:41.484 *****
2026-02-19 03:33:49.117202 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:33:49.117206 | orchestrator |
2026-02-19 03:33:49.117210 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-19 03:33:49.117214 | orchestrator | Thursday 19 February 2026 03:33:46 +0000 (0:00:00.393) 0:00:41.878 *****
2026-02-19 03:33:49.117218 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:33:49.117222 | orchestrator |
2026-02-19 03:33:49.117225 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-19 03:33:49.117229 | orchestrator | Thursday 19 February 2026 03:33:46 +0000 (0:00:00.148) 0:00:42.027 *****
2026-02-19 03:33:49.117233 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:33:49.117237 | orchestrator |
2026-02-19 03:33:49.117241 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-19 03:33:49.117245 | orchestrator | Thursday 19 February 2026 03:33:47 +0000 (0:00:00.140) 0:00:42.168 *****
2026-02-19 03:33:49.117249 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:33:49.117253 | orchestrator |
2026-02-19 03:33:49.117257 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-19 03:33:49.117261 | orchestrator | Thursday 19 February 2026 03:33:47 +0000 (0:00:00.146) 0:00:42.316 *****
2026-02-19 03:33:49.117265 | orchestrator | ok: [testbed-node-5] => {
2026-02-19 03:33:49.117269 | orchestrator |     "ceph_osd_devices": {
2026-02-19 03:33:49.117273 | orchestrator |         "sdb": {
2026-02-19 03:33:49.117288 | orchestrator |             "osd_lvm_uuid": "98b2861f-503b-5d91-adc9-6468e68ac210"
2026-02-19 03:33:49.117292 | orchestrator |         },
2026-02-19 03:33:49.117296 | orchestrator |         "sdc": {
2026-02-19 03:33:49.117300 | orchestrator |             "osd_lvm_uuid": "3bb39c06-9317-5e70-9108-eeec2efc4456"
2026-02-19 03:33:49.117304 | orchestrator |         }
2026-02-19 03:33:49.117308 | orchestrator |     }
2026-02-19 03:33:49.117312 | orchestrator | }
2026-02-19 03:33:49.117316 | orchestrator |
2026-02-19 03:33:49.117320 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-19 03:33:49.117327 | orchestrator | Thursday 19 February 2026 03:33:47 +0000 (0:00:00.146) 0:00:42.462 *****
2026-02-19 03:33:49.117331 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:33:49.117339 | orchestrator |
2026-02-19 03:33:49.117342 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-19 03:33:49.117346 | orchestrator | Thursday 19 February 2026 03:33:47 +0000 (0:00:00.157) 0:00:42.620 *****
2026-02-19 03:33:49.117350 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:33:49.117354 | orchestrator |
2026-02-19 03:33:49.117358 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-19 03:33:49.117362 | orchestrator | Thursday 19 February 2026 03:33:47 +0000 (0:00:00.222) 0:00:42.842 *****
2026-02-19 03:33:49.117365 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:33:49.117369 | orchestrator |
2026-02-19 03:33:49.117373 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-19 03:33:49.117377 | orchestrator | Thursday 19 February 2026 03:33:47 +0000 (0:00:00.147) 0:00:42.990 *****
2026-02-19 03:33:49.117381 | orchestrator | changed: [testbed-node-5] => {
2026-02-19 03:33:49.117385 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-19 03:33:49.117389 | orchestrator |         "ceph_osd_devices": {
2026-02-19 03:33:49.117393 | orchestrator |             "sdb": {
2026-02-19 03:33:49.117397 | orchestrator |                 "osd_lvm_uuid": "98b2861f-503b-5d91-adc9-6468e68ac210"
2026-02-19 03:33:49.117401 | orchestrator |             },
2026-02-19 03:33:49.117404 | orchestrator |             "sdc": {
2026-02-19 03:33:49.117408 | orchestrator |                 "osd_lvm_uuid": "3bb39c06-9317-5e70-9108-eeec2efc4456"
2026-02-19 03:33:49.117412 | orchestrator |             }
2026-02-19 03:33:49.117416 | orchestrator |         },
2026-02-19 03:33:49.117420 | orchestrator |         "lvm_volumes": [
2026-02-19 03:33:49.117424 | orchestrator |             {
2026-02-19 03:33:49.117428 | orchestrator |                 "data": "osd-block-98b2861f-503b-5d91-adc9-6468e68ac210",
2026-02-19 03:33:49.117432 | orchestrator |                 "data_vg": "ceph-98b2861f-503b-5d91-adc9-6468e68ac210"
2026-02-19 03:33:49.117436 | orchestrator |             },
2026-02-19 03:33:49.117439 | orchestrator |             {
2026-02-19 03:33:49.117443 | orchestrator |                 "data": "osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456",
2026-02-19 03:33:49.117447 | orchestrator |                 "data_vg": "ceph-3bb39c06-9317-5e70-9108-eeec2efc4456"
2026-02-19 03:33:49.117451 | orchestrator |             }
2026-02-19 03:33:49.117455 | orchestrator |         ]
2026-02-19 03:33:49.117459 | orchestrator |     }
2026-02-19 03:33:49.117462 | orchestrator | }
2026-02-19 03:33:49.117466 | orchestrator |
2026-02-19 03:33:49.117522 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-19 03:33:49.117527 | orchestrator | Thursday 19 February 2026 03:33:48 +0000 (0:00:00.216) 0:00:43.206 *****
2026-02-19 03:33:49.117531 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-19 03:33:49.117535 | orchestrator |
2026-02-19 03:33:49.117540 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 03:33:49.117544 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-19 03:33:49.117550 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-19 03:33:49.117555 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-19 03:33:49.117559 | orchestrator |
2026-02-19 03:33:49.117563 | orchestrator |
2026-02-19 03:33:49.117568 | orchestrator |
2026-02-19 03:33:49.117572 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 03:33:49.117577 | orchestrator | Thursday 19 February 2026 03:33:49 +0000 (0:00:01.052) 0:00:44.259 *****
2026-02-19 03:33:49.117581 | orchestrator | ===============================================================================
2026-02-19 03:33:49.117586 | orchestrator | Write configuration file ------------------------------------------------ 4.00s
2026-02-19 03:33:49.117590 | orchestrator | Add known partitions to the list of available block devices ------------- 1.91s
2026-02-19 03:33:49.117599 | orchestrator | Add known links to the list of available block devices ------------------ 1.31s
2026-02-19 03:33:49.117603 | orchestrator | Print configuration data ------------------------------------------------ 1.06s
2026-02-19 03:33:49.117607 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s
2026-02-19 03:33:49.117611 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s
2026-02-19 03:33:49.117615 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s
2026-02-19 03:33:49.117619 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s
2026-02-19 03:33:49.117623 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s
2026-02-19 03:33:49.117627 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.76s
2026-02-19 03:33:49.117631 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s
2026-02-19 03:33:49.117635 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-02-19 03:33:49.117638 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.69s
2026-02-19 03:33:49.117645 | orchestrator | Set OSD devices config data --------------------------------------------- 0.69s
2026-02-19 03:33:49.562730 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-02-19 03:33:49.562839 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-02-19 03:33:49.562854 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2026-02-19 03:33:49.562891 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2026-02-19 03:33:49.562912 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-02-19 03:33:49.562930 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-02-19 03:34:12.138959 | orchestrator | 2026-02-19 03:34:12 | INFO  | Task d23adc0d-77ef-4d2e-b6f3-42b2e6a312d1 (sync inventory) is running in background. Output coming soon.
2026-02-19 03:34:41.040315 | orchestrator | 2026-02-19 03:34:13 | INFO  | Starting group_vars file reorganization
2026-02-19 03:34:41.040485 | orchestrator | 2026-02-19 03:34:13 | INFO  | Moved 0 file(s) to their respective directories
2026-02-19 03:34:41.040508 | orchestrator | 2026-02-19 03:34:13 | INFO  | Group_vars file reorganization completed
2026-02-19 03:34:41.040524 | orchestrator | 2026-02-19 03:34:16 | INFO  | Starting variable preparation from inventory
2026-02-19 03:34:41.040539 | orchestrator | 2026-02-19 03:34:19 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-19 03:34:41.040551 | orchestrator | 2026-02-19 03:34:19 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-19 03:34:41.040560 | orchestrator | 2026-02-19 03:34:19 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-19 03:34:41.040568 | orchestrator | 2026-02-19 03:34:19 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-19 03:34:41.040576 | orchestrator | 2026-02-19 03:34:19 | INFO  | Variable preparation completed
2026-02-19 03:34:41.040583 | orchestrator | 2026-02-19 03:34:21 | INFO  | Starting inventory overwrite handling
2026-02-19 03:34:41.040610 | orchestrator | 2026-02-19 03:34:21 | INFO  | Handling group overwrites in 99-overwrite
2026-02-19 03:34:41.040618 | orchestrator | 2026-02-19 03:34:21 | INFO  | Removing group frr:children from 60-generic
2026-02-19 03:34:41.040626 | orchestrator | 2026-02-19 03:34:21 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-19 03:34:41.040633 | orchestrator | 2026-02-19 03:34:21 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-19 03:34:41.040669 | orchestrator | 2026-02-19 03:34:21 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-19 03:34:41.040677 | orchestrator | 2026-02-19 03:34:21 | INFO  | Handling group overwrites in 20-roles
2026-02-19 03:34:41.040684 | orchestrator | 2026-02-19 03:34:21 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-19 03:34:41.040692 | orchestrator | 2026-02-19 03:34:21 | INFO  | Removed 5 group(s) in total
2026-02-19 03:34:41.040699 | orchestrator | 2026-02-19 03:34:21 | INFO  | Inventory overwrite handling completed
2026-02-19 03:34:41.040706 | orchestrator | 2026-02-19 03:34:22 | INFO  | Starting merge of inventory files
2026-02-19 03:34:41.040714 | orchestrator | 2026-02-19 03:34:22 | INFO  | Inventory files merged successfully
2026-02-19 03:34:41.040721 | orchestrator | 2026-02-19 03:34:27 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-19 03:34:41.040728 | orchestrator | 2026-02-19 03:34:39 | INFO  | Successfully wrote ClusterShell configuration
2026-02-19 03:34:41.040736 | orchestrator | [master 86b8411] 2026-02-19-03-34
2026-02-19 03:34:41.040745 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-19 03:34:43.346598 | orchestrator | 2026-02-19 03:34:43 | INFO  | Task 7e7a899b-dd5b-4271-8dfb-d74806b40404 (ceph-create-lvm-devices) was prepared for execution.
2026-02-19 03:34:43.346702 | orchestrator | 2026-02-19 03:34:43 | INFO  | It takes a moment until task 7e7a899b-dd5b-4271-8dfb-d74806b40404 (ceph-create-lvm-devices) has been started and output is visible here.
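The "Removing group X from Y" messages above come from the inventory overwrite handling step: a group defined in a higher-priority inventory file (e.g. `99-overwrite`) causes the same group to be dropped from a lower-priority file before the files are merged. A rough sketch of dropping one `[section]` from an INI-style inventory; the parsing approach is an assumption for illustration, not the actual osism tooling:

```python
# Illustrative only: remove one group section from INI-style inventory text,
# as the "Removing group frr:children from 60-generic" log line describes.
def remove_group(inventory_text, group):
    result, skipping = [], False
    for line in inventory_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("[") and stripped.endswith("]"):
            # A new section header ends any skip and may start a new one.
            skipping = stripped[1:-1] == group
        if not skipping:
            result.append(line)
    return "\n".join(result)

# Toy inventory fragment (hypothetical contents).
inventory = "[frr:children]\ngeneric\n\n[netbird:children]\ninfra\n"
```

Here `remove_group(inventory, "frr:children")` keeps the `netbird:children` section intact while dropping the overridden group and its members.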
2026-02-19 03:34:55.235915 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-19 03:34:55.236018 | orchestrator | 2.16.14
2026-02-19 03:34:55.236032 | orchestrator |
2026-02-19 03:34:55.236042 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-19 03:34:55.236052 | orchestrator |
2026-02-19 03:34:55.236061 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-19 03:34:55.236071 | orchestrator | Thursday 19 February 2026 03:34:47 +0000 (0:00:00.320) 0:00:00.320 *****
2026-02-19 03:34:55.236080 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-19 03:34:55.236089 | orchestrator |
2026-02-19 03:34:55.236098 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-19 03:34:55.236106 | orchestrator | Thursday 19 February 2026 03:34:47 +0000 (0:00:00.252) 0:00:00.572 *****
2026-02-19 03:34:55.236115 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:34:55.236124 | orchestrator |
2026-02-19 03:34:55.236133 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.236142 | orchestrator | Thursday 19 February 2026 03:34:48 +0000 (0:00:00.243) 0:00:00.815 *****
2026-02-19 03:34:55.236151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-19 03:34:55.236159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-19 03:34:55.236186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-19 03:34:55.236201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-19 03:34:55.236216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-19 03:34:55.236231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-19 03:34:55.236244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-19 03:34:55.236258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-19 03:34:55.236273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-19 03:34:55.236287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-19 03:34:55.236325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-19 03:34:55.236341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-19 03:34:55.236356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-19 03:34:55.236370 | orchestrator |
2026-02-19 03:34:55.236384 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.236398 | orchestrator | Thursday 19 February 2026 03:34:48 +0000 (0:00:00.530) 0:00:01.346 *****
2026-02-19 03:34:55.236413 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.236428 | orchestrator |
2026-02-19 03:34:55.236513 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.236528 | orchestrator | Thursday 19 February 2026 03:34:48 +0000 (0:00:00.203) 0:00:01.549 *****
2026-02-19 03:34:55.236543 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.236559 | orchestrator |
2026-02-19 03:34:55.236574 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.236590 | orchestrator | Thursday 19 February 2026 03:34:49 +0000 (0:00:00.216) 0:00:01.766 *****
2026-02-19 03:34:55.236607 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.236622 | orchestrator |
2026-02-19 03:34:55.236636 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.236650 | orchestrator | Thursday 19 February 2026 03:34:49 +0000 (0:00:00.200) 0:00:01.967 *****
2026-02-19 03:34:55.236661 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.236671 | orchestrator |
2026-02-19 03:34:55.236681 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.236691 | orchestrator | Thursday 19 February 2026 03:34:49 +0000 (0:00:00.207) 0:00:02.174 *****
2026-02-19 03:34:55.236700 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.236710 | orchestrator |
2026-02-19 03:34:55.236720 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.236731 | orchestrator | Thursday 19 February 2026 03:34:49 +0000 (0:00:00.218) 0:00:02.392 *****
2026-02-19 03:34:55.236740 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.236749 | orchestrator |
2026-02-19 03:34:55.236759 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.236769 | orchestrator | Thursday 19 February 2026 03:34:49 +0000 (0:00:00.199) 0:00:02.592 *****
2026-02-19 03:34:55.236779 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.236794 | orchestrator |
2026-02-19 03:34:55.236812 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.236834 | orchestrator | Thursday 19 February 2026 03:34:50 +0000 (0:00:00.222) 0:00:02.815 *****
2026-02-19 03:34:55.236847 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.236861 | orchestrator |
2026-02-19 03:34:55.236874 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.236886 | orchestrator | Thursday 19 February 2026 03:34:50 +0000 (0:00:00.203) 0:00:03.019 *****
2026-02-19 03:34:55.236900 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9)
2026-02-19 03:34:55.236916 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9)
2026-02-19 03:34:55.236930 | orchestrator |
2026-02-19 03:34:55.236982 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.237021 | orchestrator | Thursday 19 February 2026 03:34:50 +0000 (0:00:00.423) 0:00:03.442 *****
2026-02-19 03:34:55.237047 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e)
2026-02-19 03:34:55.237062 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e)
2026-02-19 03:34:55.237071 | orchestrator |
2026-02-19 03:34:55.237080 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.237102 | orchestrator | Thursday 19 February 2026 03:34:51 +0000 (0:00:00.644) 0:00:04.086 *****
2026-02-19 03:34:55.237112 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8)
2026-02-19 03:34:55.237120 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8)
2026-02-19 03:34:55.237129 | orchestrator |
2026-02-19 03:34:55.237138 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.237146 | orchestrator | Thursday 19 February 2026 03:34:52 +0000 (0:00:00.650) 0:00:04.737 *****
2026-02-19 03:34:55.237155 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417)
2026-02-19 03:34:55.237163 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417)
2026-02-19 03:34:55.237172 | orchestrator |
2026-02-19 03:34:55.237190 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:34:55.237200 | orchestrator | Thursday 19 February 2026 03:34:52 +0000 (0:00:00.866) 0:00:05.603 *****
2026-02-19 03:34:55.237210 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-19 03:34:55.237225 | orchestrator |
2026-02-19 03:34:55.237246 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:34:55.237263 | orchestrator | Thursday 19 February 2026 03:34:53 +0000 (0:00:00.349) 0:00:05.953 *****
2026-02-19 03:34:55.237277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-19 03:34:55.237291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-19 03:34:55.237305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-19 03:34:55.237319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-19 03:34:55.237333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-19 03:34:55.237348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-19 03:34:55.237363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-19 03:34:55.237378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-19 03:34:55.237393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-19 03:34:55.237407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-19 03:34:55.237421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-19 03:34:55.237430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-19 03:34:55.237501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-19 03:34:55.237510 | orchestrator |
2026-02-19 03:34:55.237519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:34:55.237527 | orchestrator | Thursday 19 February 2026 03:34:53 +0000 (0:00:00.413) 0:00:06.367 *****
2026-02-19 03:34:55.237536 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.237545 | orchestrator |
2026-02-19 03:34:55.237560 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:34:55.237575 | orchestrator | Thursday 19 February 2026 03:34:53 +0000 (0:00:00.210) 0:00:06.577 *****
2026-02-19 03:34:55.237589 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.237602 | orchestrator |
2026-02-19 03:34:55.237616 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:34:55.237629 | orchestrator | Thursday 19 February 2026 03:34:54 +0000 (0:00:00.202) 0:00:06.780 *****
2026-02-19 03:34:55.237643 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.237668 | orchestrator |
2026-02-19 03:34:55.237683 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:34:55.237698 | orchestrator | Thursday 19 February 2026 03:34:54 +0000 (0:00:00.205) 0:00:06.985 *****
2026-02-19 03:34:55.237712 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.237725 | orchestrator |
2026-02-19 03:34:55.237741 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:34:55.237756 | orchestrator | Thursday 19 February 2026 03:34:54 +0000 (0:00:00.216) 0:00:07.202 *****
2026-02-19 03:34:55.237771 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.237786 | orchestrator |
2026-02-19 03:34:55.237801 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:34:55.237815 | orchestrator | Thursday 19 February 2026 03:34:54 +0000 (0:00:00.202) 0:00:07.405 *****
2026-02-19 03:34:55.237829 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.237838 | orchestrator |
2026-02-19 03:34:55.237847 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:34:55.237855 | orchestrator | Thursday 19 February 2026 03:34:55 +0000 (0:00:00.220) 0:00:07.626 *****
2026-02-19 03:34:55.237864 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:34:55.237872 | orchestrator |
2026-02-19 03:34:55.237892 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:03.728381 | orchestrator | Thursday 19 February 2026 03:34:55 +0000 (0:00:00.210) 0:00:07.836 *****
2026-02-19 03:35:03.728488 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.728499 | orchestrator |
2026-02-19 03:35:03.728506 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:03.728513 | orchestrator | Thursday 19 February 2026 03:34:55 +0000 (0:00:00.672) 0:00:08.509 *****
2026-02-19 03:35:03.728519 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-19 03:35:03.728526 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-19 03:35:03.728532 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-19 03:35:03.728538 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-19 03:35:03.728543 | orchestrator |
2026-02-19 03:35:03.728549 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:03.728555 | orchestrator | Thursday 19 February 2026 03:34:56 +0000 (0:00:00.741) 0:00:09.250 *****
2026-02-19 03:35:03.728561 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.728567 | orchestrator |
2026-02-19 03:35:03.728573 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:03.728578 | orchestrator | Thursday 19 February 2026 03:34:56 +0000 (0:00:00.233) 0:00:09.484 *****
2026-02-19 03:35:03.728584 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.728590 | orchestrator |
2026-02-19 03:35:03.728608 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:03.728614 | orchestrator | Thursday 19 February 2026 03:34:57 +0000 (0:00:00.232) 0:00:09.717 *****
2026-02-19 03:35:03.728620 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.728625 | orchestrator |
2026-02-19 03:35:03.728631 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:03.728637 | orchestrator | Thursday 19 February 2026 03:34:57 +0000 (0:00:00.202) 0:00:09.920 *****
2026-02-19 03:35:03.728643 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.728648 | orchestrator |
2026-02-19 03:35:03.728654 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-19 03:35:03.728660 | orchestrator | Thursday 19 February 2026 03:34:57 +0000 (0:00:00.200) 0:00:10.121 *****
2026-02-19 03:35:03.728666 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.728671 | orchestrator |
2026-02-19 03:35:03.728677 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-19 03:35:03.728683 | orchestrator | Thursday 19 February 2026 03:34:57 +0000 (0:00:00.124) 0:00:10.245 *****
2026-02-19 03:35:03.728689 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc132c82-2da4-526a-8d14-ac4e81fe1159'}})
2026-02-19 03:35:03.728711 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '900578fb-6201-5328-bc2d-5e3d92afe542'}})
2026-02-19 03:35:03.728717 | orchestrator |
2026-02-19 03:35:03.728722 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-19 03:35:03.728729 | orchestrator | Thursday 19 February 2026 03:34:57 +0000 (0:00:00.189) 0:00:10.435 *****
2026-02-19 03:35:03.728736 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 03:35:03.728743 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 03:35:03.728749 | orchestrator |
2026-02-19 03:35:03.728755 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-19 03:35:03.728761 | orchestrator | Thursday 19 February 2026 03:34:59 +0000 (0:00:02.070) 0:00:12.505 *****
2026-02-19 03:35:03.728766 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 03:35:03.728773 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 03:35:03.728779 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.728785 | orchestrator |
2026-02-19 03:35:03.728790 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-19 03:35:03.728796 | orchestrator | Thursday 19 February 2026 03:35:00 +0000 (0:00:00.165) 0:00:12.670 *****
2026-02-19 03:35:03.728802 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 03:35:03.728808 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 03:35:03.728813 | orchestrator |
2026-02-19 03:35:03.728819 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-19 03:35:03.728825 | orchestrator | Thursday 19 February 2026 03:35:01 +0000 (0:00:01.518) 0:00:14.189 *****
2026-02-19 03:35:03.728830 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 03:35:03.728836 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 03:35:03.728842 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.728848 | orchestrator |
2026-02-19 03:35:03.728853 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-19 03:35:03.728859 | orchestrator | Thursday 19 February 2026 03:35:01 +0000 (0:00:00.155) 0:00:14.345 *****
2026-02-19 03:35:03.728877 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.728883 | orchestrator |
2026-02-19 03:35:03.728889 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-19 03:35:03.728894 | orchestrator | Thursday 19 February 2026 03:35:02 +0000 (0:00:00.385) 0:00:14.730 *****
2026-02-19 03:35:03.728900 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 03:35:03.728906 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 03:35:03.728911 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.728917 | orchestrator |
2026-02-19 03:35:03.728923 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-19 03:35:03.728928 | orchestrator | Thursday 19 February 2026 03:35:02 +0000 (0:00:00.156) 0:00:14.886 *****
2026-02-19 03:35:03.728939 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.728946 | orchestrator |
2026-02-19 03:35:03.728953 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-19 03:35:03.728960 | orchestrator | Thursday 19 February 2026 03:35:02 +0000 (0:00:00.144) 0:00:15.031 *****
2026-02-19 03:35:03.728970 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 03:35:03.728977 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 03:35:03.728984 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.728990 | orchestrator |
2026-02-19 03:35:03.728997 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-19 03:35:03.729004 | orchestrator | Thursday 19 February 2026 03:35:02 +0000 (0:00:00.171) 0:00:15.202 *****
2026-02-19 03:35:03.729011 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.729017 | orchestrator |
2026-02-19 03:35:03.729024 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-19 03:35:03.729031 | orchestrator | Thursday 19 February 2026 03:35:02 +0000 (0:00:00.147) 0:00:15.349 *****
2026-02-19 03:35:03.729037 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 03:35:03.729043 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 03:35:03.729051 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.729057 | orchestrator |
2026-02-19 03:35:03.729064 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-19 03:35:03.729070 | orchestrator | Thursday 19 February 2026 03:35:02 +0000 (0:00:00.146) 0:00:15.508 *****
2026-02-19 03:35:03.729077 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:35:03.729084 | orchestrator |
2026-02-19 03:35:03.729091 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-19 03:35:03.729097 | orchestrator | Thursday 19 February 2026 03:35:03 +0000 (0:00:00.146) 0:00:15.655 *****
2026-02-19 03:35:03.729104 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 03:35:03.729110 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 03:35:03.729117 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:35:03.729124 | orchestrator |
2026-02-19 03:35:03.729131 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-19 03:35:03.729137 | orchestrator | Thursday 19 February 2026 03:35:03 +0000 (0:00:00.162) 0:00:15.817 *****
2026-02-19 03:35:03.729144 | orchestrator | skipping: [testbed-node-3] =>
(item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:03.729150 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:03.729157 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:03.729163 | orchestrator | 2026-02-19 03:35:03.729170 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-19 03:35:03.729177 | orchestrator | Thursday 19 February 2026 03:35:03 +0000 (0:00:00.188) 0:00:16.006 ***** 2026-02-19 03:35:03.729184 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:03.729191 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:03.729202 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:03.729208 | orchestrator | 2026-02-19 03:35:03.729217 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-19 03:35:03.729228 | orchestrator | Thursday 19 February 2026 03:35:03 +0000 (0:00:00.174) 0:00:16.180 ***** 2026-02-19 03:35:03.729237 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:03.729251 | orchestrator | 2026-02-19 03:35:03.729266 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-19 03:35:03.729280 | orchestrator | Thursday 19 February 2026 03:35:03 +0000 (0:00:00.150) 0:00:16.331 ***** 2026-02-19 03:35:10.516124 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.516265 | orchestrator | 2026-02-19 03:35:10.516295 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-19 03:35:10.516352 | orchestrator | Thursday 19 February 2026 03:35:03 +0000 (0:00:00.142) 0:00:16.474 ***** 2026-02-19 03:35:10.516377 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.516397 | orchestrator | 2026-02-19 03:35:10.516417 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-19 03:35:10.516511 | orchestrator | Thursday 19 February 2026 03:35:04 +0000 (0:00:00.360) 0:00:16.835 ***** 2026-02-19 03:35:10.516533 | orchestrator | ok: [testbed-node-3] => { 2026-02-19 03:35:10.516556 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-19 03:35:10.516579 | orchestrator | } 2026-02-19 03:35:10.516635 | orchestrator | 2026-02-19 03:35:10.516657 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-19 03:35:10.516679 | orchestrator | Thursday 19 February 2026 03:35:04 +0000 (0:00:00.154) 0:00:16.989 ***** 2026-02-19 03:35:10.516701 | orchestrator | ok: [testbed-node-3] => { 2026-02-19 03:35:10.516723 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-19 03:35:10.516744 | orchestrator | } 2026-02-19 03:35:10.516765 | orchestrator | 2026-02-19 03:35:10.516787 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-19 03:35:10.516830 | orchestrator | Thursday 19 February 2026 03:35:04 +0000 (0:00:00.168) 0:00:17.158 ***** 2026-02-19 03:35:10.516853 | orchestrator | ok: [testbed-node-3] => { 2026-02-19 03:35:10.516876 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-19 03:35:10.516896 | orchestrator | } 2026-02-19 03:35:10.516914 | orchestrator | 2026-02-19 03:35:10.516933 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-19 03:35:10.516953 | orchestrator | Thursday 19 February 2026 03:35:04 +0000 (0:00:00.150) 0:00:17.308 ***** 2026-02-19 03:35:10.516973 | orchestrator | ok: 
[testbed-node-3] 2026-02-19 03:35:10.516992 | orchestrator | 2026-02-19 03:35:10.517012 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-19 03:35:10.517032 | orchestrator | Thursday 19 February 2026 03:35:05 +0000 (0:00:00.756) 0:00:18.065 ***** 2026-02-19 03:35:10.517052 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:35:10.517073 | orchestrator | 2026-02-19 03:35:10.517093 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-19 03:35:10.517113 | orchestrator | Thursday 19 February 2026 03:35:06 +0000 (0:00:00.558) 0:00:18.623 ***** 2026-02-19 03:35:10.517133 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:35:10.517151 | orchestrator | 2026-02-19 03:35:10.517171 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-19 03:35:10.517191 | orchestrator | Thursday 19 February 2026 03:35:06 +0000 (0:00:00.540) 0:00:19.164 ***** 2026-02-19 03:35:10.517208 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:35:10.517227 | orchestrator | 2026-02-19 03:35:10.517244 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-19 03:35:10.517262 | orchestrator | Thursday 19 February 2026 03:35:06 +0000 (0:00:00.156) 0:00:19.321 ***** 2026-02-19 03:35:10.517281 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.517299 | orchestrator | 2026-02-19 03:35:10.517318 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-19 03:35:10.517369 | orchestrator | Thursday 19 February 2026 03:35:06 +0000 (0:00:00.136) 0:00:19.457 ***** 2026-02-19 03:35:10.517389 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.517408 | orchestrator | 2026-02-19 03:35:10.517566 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-19 03:35:10.517598 | orchestrator | 
Thursday 19 February 2026 03:35:06 +0000 (0:00:00.116) 0:00:19.574 ***** 2026-02-19 03:35:10.517619 | orchestrator | ok: [testbed-node-3] => { 2026-02-19 03:35:10.517640 | orchestrator |  "vgs_report": { 2026-02-19 03:35:10.517660 | orchestrator |  "vg": [] 2026-02-19 03:35:10.517681 | orchestrator |  } 2026-02-19 03:35:10.517702 | orchestrator | } 2026-02-19 03:35:10.517720 | orchestrator | 2026-02-19 03:35:10.517739 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-19 03:35:10.517757 | orchestrator | Thursday 19 February 2026 03:35:07 +0000 (0:00:00.151) 0:00:19.725 ***** 2026-02-19 03:35:10.517775 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.517793 | orchestrator | 2026-02-19 03:35:10.517811 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-19 03:35:10.517830 | orchestrator | Thursday 19 February 2026 03:35:07 +0000 (0:00:00.141) 0:00:19.866 ***** 2026-02-19 03:35:10.517848 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.517868 | orchestrator | 2026-02-19 03:35:10.517887 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-19 03:35:10.517907 | orchestrator | Thursday 19 February 2026 03:35:07 +0000 (0:00:00.337) 0:00:20.203 ***** 2026-02-19 03:35:10.517926 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.517946 | orchestrator | 2026-02-19 03:35:10.517958 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-19 03:35:10.517969 | orchestrator | Thursday 19 February 2026 03:35:07 +0000 (0:00:00.135) 0:00:20.339 ***** 2026-02-19 03:35:10.517979 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.517990 | orchestrator | 2026-02-19 03:35:10.518001 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-19 03:35:10.518011 | orchestrator | 
Thursday 19 February 2026 03:35:07 +0000 (0:00:00.134) 0:00:20.473 ***** 2026-02-19 03:35:10.518089 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518099 | orchestrator | 2026-02-19 03:35:10.518109 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-19 03:35:10.518118 | orchestrator | Thursday 19 February 2026 03:35:07 +0000 (0:00:00.131) 0:00:20.605 ***** 2026-02-19 03:35:10.518127 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518137 | orchestrator | 2026-02-19 03:35:10.518146 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-19 03:35:10.518156 | orchestrator | Thursday 19 February 2026 03:35:08 +0000 (0:00:00.140) 0:00:20.746 ***** 2026-02-19 03:35:10.518166 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518175 | orchestrator | 2026-02-19 03:35:10.518184 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-19 03:35:10.518194 | orchestrator | Thursday 19 February 2026 03:35:08 +0000 (0:00:00.160) 0:00:20.907 ***** 2026-02-19 03:35:10.518229 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518239 | orchestrator | 2026-02-19 03:35:10.518249 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-19 03:35:10.518259 | orchestrator | Thursday 19 February 2026 03:35:08 +0000 (0:00:00.147) 0:00:21.054 ***** 2026-02-19 03:35:10.518268 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518278 | orchestrator | 2026-02-19 03:35:10.518287 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-19 03:35:10.518297 | orchestrator | Thursday 19 February 2026 03:35:08 +0000 (0:00:00.157) 0:00:21.212 ***** 2026-02-19 03:35:10.518306 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518316 | orchestrator | 2026-02-19 03:35:10.518325 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-19 03:35:10.518335 | orchestrator | Thursday 19 February 2026 03:35:08 +0000 (0:00:00.157) 0:00:21.369 ***** 2026-02-19 03:35:10.518360 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518370 | orchestrator | 2026-02-19 03:35:10.518379 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-19 03:35:10.518388 | orchestrator | Thursday 19 February 2026 03:35:08 +0000 (0:00:00.145) 0:00:21.515 ***** 2026-02-19 03:35:10.518398 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518407 | orchestrator | 2026-02-19 03:35:10.518455 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-19 03:35:10.518474 | orchestrator | Thursday 19 February 2026 03:35:09 +0000 (0:00:00.137) 0:00:21.652 ***** 2026-02-19 03:35:10.518487 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518497 | orchestrator | 2026-02-19 03:35:10.518506 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-19 03:35:10.518516 | orchestrator | Thursday 19 February 2026 03:35:09 +0000 (0:00:00.164) 0:00:21.817 ***** 2026-02-19 03:35:10.518525 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518534 | orchestrator | 2026-02-19 03:35:10.518544 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-19 03:35:10.518553 | orchestrator | Thursday 19 February 2026 03:35:09 +0000 (0:00:00.333) 0:00:22.151 ***** 2026-02-19 03:35:10.518564 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:10.518575 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 
'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:10.518585 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518594 | orchestrator | 2026-02-19 03:35:10.518603 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-19 03:35:10.518613 | orchestrator | Thursday 19 February 2026 03:35:09 +0000 (0:00:00.162) 0:00:22.313 ***** 2026-02-19 03:35:10.518622 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:10.518632 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:10.518641 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518651 | orchestrator | 2026-02-19 03:35:10.518660 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-19 03:35:10.518669 | orchestrator | Thursday 19 February 2026 03:35:09 +0000 (0:00:00.155) 0:00:22.469 ***** 2026-02-19 03:35:10.518689 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:10.518699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:10.518709 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518718 | orchestrator | 2026-02-19 03:35:10.518727 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-19 03:35:10.518737 | orchestrator | Thursday 19 February 2026 03:35:10 +0000 (0:00:00.158) 0:00:22.627 ***** 2026-02-19 03:35:10.518746 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:10.518756 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:10.518766 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518775 | orchestrator | 2026-02-19 03:35:10.518784 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-19 03:35:10.518794 | orchestrator | Thursday 19 February 2026 03:35:10 +0000 (0:00:00.157) 0:00:22.785 ***** 2026-02-19 03:35:10.518811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:10.518821 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:10.518830 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:10.518840 | orchestrator | 2026-02-19 03:35:10.518849 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-19 03:35:10.518859 | orchestrator | Thursday 19 February 2026 03:35:10 +0000 (0:00:00.166) 0:00:22.952 ***** 2026-02-19 03:35:10.518884 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:16.034250 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:16.034354 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:16.034366 | orchestrator | 2026-02-19 03:35:16.034375 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-19 03:35:16.034392 | orchestrator | Thursday 19 February 2026 03:35:10 +0000 (0:00:00.171) 0:00:23.123 ***** 2026-02-19 03:35:16.034400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:16.035167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:16.035248 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:16.035262 | orchestrator | 2026-02-19 03:35:16.035291 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-19 03:35:16.035302 | orchestrator | Thursday 19 February 2026 03:35:10 +0000 (0:00:00.167) 0:00:23.291 ***** 2026-02-19 03:35:16.035308 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:16.035313 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:16.035319 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:16.035324 | orchestrator | 2026-02-19 03:35:16.035330 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-19 03:35:16.035335 | orchestrator | Thursday 19 February 2026 03:35:10 +0000 (0:00:00.175) 0:00:23.466 ***** 2026-02-19 03:35:16.035340 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:35:16.035346 | orchestrator | 2026-02-19 03:35:16.035352 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-19 03:35:16.035357 | orchestrator | Thursday 19 February 2026 03:35:11 +0000 
(0:00:00.579) 0:00:24.045 ***** 2026-02-19 03:35:16.035362 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:35:16.035367 | orchestrator | 2026-02-19 03:35:16.035373 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-19 03:35:16.035378 | orchestrator | Thursday 19 February 2026 03:35:11 +0000 (0:00:00.562) 0:00:24.608 ***** 2026-02-19 03:35:16.035383 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:35:16.035388 | orchestrator | 2026-02-19 03:35:16.035393 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-19 03:35:16.035399 | orchestrator | Thursday 19 February 2026 03:35:12 +0000 (0:00:00.162) 0:00:24.770 ***** 2026-02-19 03:35:16.035405 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'vg_name': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'}) 2026-02-19 03:35:16.035413 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'vg_name': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'}) 2026-02-19 03:35:16.035467 | orchestrator | 2026-02-19 03:35:16.035480 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-19 03:35:16.035490 | orchestrator | Thursday 19 February 2026 03:35:12 +0000 (0:00:00.191) 0:00:24.962 ***** 2026-02-19 03:35:16.035499 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:16.035509 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:16.035518 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:16.035528 | orchestrator | 2026-02-19 03:35:16.035539 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-19 03:35:16.035544 | orchestrator | Thursday 19 February 2026 03:35:12 +0000 (0:00:00.388) 0:00:25.350 ***** 2026-02-19 03:35:16.035550 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:16.035555 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:16.035560 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:16.035565 | orchestrator | 2026-02-19 03:35:16.035571 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-19 03:35:16.035576 | orchestrator | Thursday 19 February 2026 03:35:12 +0000 (0:00:00.173) 0:00:25.524 ***** 2026-02-19 03:35:16.035581 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})  2026-02-19 03:35:16.035586 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})  2026-02-19 03:35:16.035591 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:35:16.035596 | orchestrator | 2026-02-19 03:35:16.035602 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-19 03:35:16.035607 | orchestrator | Thursday 19 February 2026 03:35:13 +0000 (0:00:00.169) 0:00:25.693 ***** 2026-02-19 03:35:16.035631 | orchestrator | ok: [testbed-node-3] => { 2026-02-19 03:35:16.035637 | orchestrator |  "lvm_report": { 2026-02-19 03:35:16.035642 | orchestrator |  "lv": [ 2026-02-19 03:35:16.035648 | orchestrator |  { 2026-02-19 03:35:16.035653 | orchestrator |  "lv_name": 
"osd-block-900578fb-6201-5328-bc2d-5e3d92afe542", 2026-02-19 03:35:16.035659 | orchestrator |  "vg_name": "ceph-900578fb-6201-5328-bc2d-5e3d92afe542" 2026-02-19 03:35:16.035664 | orchestrator |  }, 2026-02-19 03:35:16.035669 | orchestrator |  { 2026-02-19 03:35:16.035675 | orchestrator |  "lv_name": "osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159", 2026-02-19 03:35:16.035680 | orchestrator |  "vg_name": "ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159" 2026-02-19 03:35:16.035685 | orchestrator |  } 2026-02-19 03:35:16.035690 | orchestrator |  ], 2026-02-19 03:35:16.035695 | orchestrator |  "pv": [ 2026-02-19 03:35:16.035700 | orchestrator |  { 2026-02-19 03:35:16.035705 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-19 03:35:16.035710 | orchestrator |  "vg_name": "ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159" 2026-02-19 03:35:16.035715 | orchestrator |  }, 2026-02-19 03:35:16.035720 | orchestrator |  { 2026-02-19 03:35:16.035742 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-19 03:35:16.035748 | orchestrator |  "vg_name": "ceph-900578fb-6201-5328-bc2d-5e3d92afe542" 2026-02-19 03:35:16.035753 | orchestrator |  } 2026-02-19 03:35:16.035758 | orchestrator |  ] 2026-02-19 03:35:16.035763 | orchestrator |  } 2026-02-19 03:35:16.035769 | orchestrator | } 2026-02-19 03:35:16.035806 | orchestrator | 2026-02-19 03:35:16.035812 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-19 03:35:16.035817 | orchestrator | 2026-02-19 03:35:16.035823 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-19 03:35:16.035828 | orchestrator | Thursday 19 February 2026 03:35:13 +0000 (0:00:00.310) 0:00:26.004 ***** 2026-02-19 03:35:16.035834 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-19 03:35:16.035839 | orchestrator | 2026-02-19 03:35:16.035844 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-19 
03:35:16.035850 | orchestrator | Thursday 19 February 2026 03:35:13 +0000 (0:00:00.258) 0:00:26.262 ***** 2026-02-19 03:35:16.035855 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:35:16.035860 | orchestrator | 2026-02-19 03:35:16.035865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:16.035870 | orchestrator | Thursday 19 February 2026 03:35:13 +0000 (0:00:00.240) 0:00:26.502 ***** 2026-02-19 03:35:16.035875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-19 03:35:16.035880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-19 03:35:16.035886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-19 03:35:16.035891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-19 03:35:16.035896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-19 03:35:16.035901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-19 03:35:16.035907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-19 03:35:16.035912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-19 03:35:16.035917 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-19 03:35:16.035922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-19 03:35:16.035927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-19 03:35:16.035932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-19 03:35:16.035937 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-19 03:35:16.035942 | orchestrator |
2026-02-19 03:35:16.035947 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:16.035952 | orchestrator | Thursday 19 February 2026 03:35:14 +0000 (0:00:00.417) 0:00:26.919 *****
2026-02-19 03:35:16.035958 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:16.035963 | orchestrator |
2026-02-19 03:35:16.035968 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:16.035973 | orchestrator | Thursday 19 February 2026 03:35:14 +0000 (0:00:00.211) 0:00:27.131 *****
2026-02-19 03:35:16.035978 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:16.035983 | orchestrator |
2026-02-19 03:35:16.035988 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:16.035993 | orchestrator | Thursday 19 February 2026 03:35:15 +0000 (0:00:00.636) 0:00:27.767 *****
2026-02-19 03:35:16.035998 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:16.036004 | orchestrator |
2026-02-19 03:35:16.036009 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:16.036014 | orchestrator | Thursday 19 February 2026 03:35:15 +0000 (0:00:00.225) 0:00:27.993 *****
2026-02-19 03:35:16.036019 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:16.036025 | orchestrator |
2026-02-19 03:35:16.036030 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:16.036036 | orchestrator | Thursday 19 February 2026 03:35:15 +0000 (0:00:00.218) 0:00:28.212 *****
2026-02-19 03:35:16.036046 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:16.036052 | orchestrator |
2026-02-19 03:35:16.036057 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:16.036062 | orchestrator | Thursday 19 February 2026 03:35:15 +0000 (0:00:00.207) 0:00:28.419 *****
2026-02-19 03:35:16.036068 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:16.036073 | orchestrator |
2026-02-19 03:35:16.036083 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:27.585235 | orchestrator | Thursday 19 February 2026 03:35:16 +0000 (0:00:00.219) 0:00:28.639 *****
2026-02-19 03:35:27.585339 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.585354 | orchestrator |
2026-02-19 03:35:27.585365 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:27.585375 | orchestrator | Thursday 19 February 2026 03:35:16 +0000 (0:00:00.217) 0:00:28.857 *****
2026-02-19 03:35:27.585386 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.585395 | orchestrator |
2026-02-19 03:35:27.585405 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:27.585415 | orchestrator | Thursday 19 February 2026 03:35:16 +0000 (0:00:00.243) 0:00:29.101 *****
2026-02-19 03:35:27.585487 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec)
2026-02-19 03:35:27.585499 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec)
2026-02-19 03:35:27.585520 | orchestrator |
2026-02-19 03:35:27.585546 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:27.585556 | orchestrator | Thursday 19 February 2026 03:35:16 +0000 (0:00:00.437) 0:00:29.539 *****
2026-02-19 03:35:27.585566 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262)
2026-02-19 03:35:27.585576 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262)
2026-02-19 03:35:27.585586 | orchestrator |
2026-02-19 03:35:27.585595 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:27.585605 | orchestrator | Thursday 19 February 2026 03:35:17 +0000 (0:00:00.445) 0:00:29.985 *****
2026-02-19 03:35:27.585615 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0)
2026-02-19 03:35:27.585624 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0)
2026-02-19 03:35:27.585639 | orchestrator |
2026-02-19 03:35:27.585657 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:27.585677 | orchestrator | Thursday 19 February 2026 03:35:18 +0000 (0:00:00.684) 0:00:30.669 *****
2026-02-19 03:35:27.585701 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58)
2026-02-19 03:35:27.585717 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58)
2026-02-19 03:35:27.585734 | orchestrator |
2026-02-19 03:35:27.585749 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-19 03:35:27.585766 | orchestrator | Thursday 19 February 2026 03:35:19 +0000 (0:00:00.948) 0:00:31.618 *****
2026-02-19 03:35:27.585783 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-19 03:35:27.585800 | orchestrator |
2026-02-19 03:35:27.585818 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.585837 | orchestrator | Thursday 19 February 2026 03:35:19 +0000 (0:00:00.350) 0:00:31.968 *****
2026-02-19 03:35:27.585855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-19 03:35:27.585875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-19 03:35:27.585893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-19 03:35:27.585934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-19 03:35:27.585946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-19 03:35:27.585958 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-19 03:35:27.585969 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-19 03:35:27.585980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-19 03:35:27.585991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-19 03:35:27.586002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-19 03:35:27.586014 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-19 03:35:27.586165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-19 03:35:27.586183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-19 03:35:27.586200 | orchestrator |
2026-02-19 03:35:27.586217 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.586233 | orchestrator | Thursday 19 February 2026 03:35:19 +0000 (0:00:00.417) 0:00:32.386 *****
2026-02-19 03:35:27.586250 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.586263 | orchestrator |
2026-02-19 03:35:27.586272 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.586282 | orchestrator | Thursday 19 February 2026 03:35:19 +0000 (0:00:00.223) 0:00:32.609 *****
2026-02-19 03:35:27.586292 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.586301 | orchestrator |
2026-02-19 03:35:27.586311 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.586320 | orchestrator | Thursday 19 February 2026 03:35:20 +0000 (0:00:00.228) 0:00:32.838 *****
2026-02-19 03:35:27.586330 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.586340 | orchestrator |
2026-02-19 03:35:27.586370 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.586381 | orchestrator | Thursday 19 February 2026 03:35:20 +0000 (0:00:00.211) 0:00:33.049 *****
2026-02-19 03:35:27.586391 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.586400 | orchestrator |
2026-02-19 03:35:27.586410 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.586448 | orchestrator | Thursday 19 February 2026 03:35:20 +0000 (0:00:00.229) 0:00:33.278 *****
2026-02-19 03:35:27.586465 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.586480 | orchestrator |
2026-02-19 03:35:27.586505 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.586524 | orchestrator | Thursday 19 February 2026 03:35:20 +0000 (0:00:00.202) 0:00:33.481 *****
2026-02-19 03:35:27.586540 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.586555 | orchestrator |
2026-02-19 03:35:27.586571 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.586586 | orchestrator | Thursday 19 February 2026 03:35:21 +0000 (0:00:00.212) 0:00:33.693 *****
2026-02-19 03:35:27.586612 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.586629 | orchestrator |
2026-02-19 03:35:27.586646 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.586663 | orchestrator | Thursday 19 February 2026 03:35:21 +0000 (0:00:00.224) 0:00:33.918 *****
2026-02-19 03:35:27.586679 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.586695 | orchestrator |
2026-02-19 03:35:27.586711 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.586727 | orchestrator | Thursday 19 February 2026 03:35:21 +0000 (0:00:00.655) 0:00:34.574 *****
2026-02-19 03:35:27.586743 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-19 03:35:27.586774 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-19 03:35:27.586791 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-19 03:35:27.586807 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-19 03:35:27.586823 | orchestrator |
2026-02-19 03:35:27.586840 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.586856 | orchestrator | Thursday 19 February 2026 03:35:22 +0000 (0:00:00.664) 0:00:35.239 *****
2026-02-19 03:35:27.586873 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.586890 | orchestrator |
2026-02-19 03:35:27.586905 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.586921 | orchestrator | Thursday 19 February 2026 03:35:22 +0000 (0:00:00.212) 0:00:35.451 *****
2026-02-19 03:35:27.586938 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.586955 | orchestrator |
2026-02-19 03:35:27.586972 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.586989 | orchestrator | Thursday 19 February 2026 03:35:23 +0000 (0:00:00.220) 0:00:35.671 *****
2026-02-19 03:35:27.587003 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.587013 | orchestrator |
2026-02-19 03:35:27.587023 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-19 03:35:27.587033 | orchestrator | Thursday 19 February 2026 03:35:23 +0000 (0:00:00.224) 0:00:35.896 *****
2026-02-19 03:35:27.587042 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.587051 | orchestrator |
2026-02-19 03:35:27.587061 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-19 03:35:27.587071 | orchestrator | Thursday 19 February 2026 03:35:23 +0000 (0:00:00.150) 0:00:36.130 *****
2026-02-19 03:35:27.587080 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.587089 | orchestrator |
2026-02-19 03:35:27.587099 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-19 03:35:27.587108 | orchestrator | Thursday 19 February 2026 03:35:23 +0000 (0:00:00.150) 0:00:36.281 *****
2026-02-19 03:35:27.587118 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '64a1f4ab-0c55-53ad-929a-fda4cfe46a02'}})
2026-02-19 03:35:27.587128 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'}})
2026-02-19 03:35:27.587137 | orchestrator |
2026-02-19 03:35:27.587147 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-19 03:35:27.587156 | orchestrator | Thursday 19 February 2026 03:35:23 +0000 (0:00:00.224) 0:00:36.506 *****
2026-02-19 03:35:27.587167 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:27.587178 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:27.587193 | orchestrator |
2026-02-19 03:35:27.587209 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-19 03:35:27.587225 | orchestrator | Thursday 19 February 2026 03:35:25 +0000 (0:00:02.027) 0:00:38.533 *****
2026-02-19 03:35:27.587241 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:27.587258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:27.587276 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:27.587293 | orchestrator |
2026-02-19 03:35:27.587310 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-19 03:35:27.587326 | orchestrator | Thursday 19 February 2026 03:35:26 +0000 (0:00:00.166) 0:00:38.700 *****
2026-02-19 03:35:27.587343 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:27.587383 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:33.166865 | orchestrator |
2026-02-19 03:35:33.166952 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-19 03:35:33.166963 | orchestrator | Thursday 19 February 2026 03:35:27 +0000 (0:00:01.484) 0:00:40.184 *****
2026-02-19 03:35:33.166971 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:33.166980 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:33.166987 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.166995 | orchestrator |
2026-02-19 03:35:33.167016 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-19 03:35:33.167023 | orchestrator | Thursday 19 February 2026 03:35:27 +0000 (0:00:00.370) 0:00:40.555 *****
2026-02-19 03:35:33.167030 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167037 | orchestrator |
2026-02-19 03:35:33.167043 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-19 03:35:33.167050 | orchestrator | Thursday 19 February 2026 03:35:28 +0000 (0:00:00.132) 0:00:40.688 *****
2026-02-19 03:35:33.167057 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:33.167063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:33.167085 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167098 | orchestrator |
2026-02-19 03:35:33.167105 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-19 03:35:33.167112 | orchestrator | Thursday 19 February 2026 03:35:28 +0000 (0:00:00.142) 0:00:40.828 *****
2026-02-19 03:35:33.167119 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167125 | orchestrator |
2026-02-19 03:35:33.167132 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-19 03:35:33.167139 | orchestrator | Thursday 19 February 2026 03:35:28 +0000 (0:00:00.142) 0:00:40.970 *****
2026-02-19 03:35:33.167145 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:33.167152 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:33.167158 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167166 | orchestrator |
2026-02-19 03:35:33.167173 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-19 03:35:33.167180 | orchestrator | Thursday 19 February 2026 03:35:28 +0000 (0:00:00.155) 0:00:41.125 *****
2026-02-19 03:35:33.167192 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167205 | orchestrator |
2026-02-19 03:35:33.167220 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-19 03:35:33.167231 | orchestrator | Thursday 19 February 2026 03:35:28 +0000 (0:00:00.137) 0:00:41.263 *****
2026-02-19 03:35:33.167242 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:33.167253 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:33.167265 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167275 | orchestrator |
2026-02-19 03:35:33.167287 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-19 03:35:33.167319 | orchestrator | Thursday 19 February 2026 03:35:28 +0000 (0:00:00.147) 0:00:41.411 *****
2026-02-19 03:35:33.167326 | orchestrator | ok: [testbed-node-4]
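Editor's note: the "Create dict of block VGs -> PVs from ceph_osd_devices", "Create block VGs", and "Create block LVs" tasks above all derive their names from each device's `osd_lvm_uuid` (LV `osd-block-<uuid>` inside VG `ceph-<uuid>`). A minimal sketch of that naming convention, using a hypothetical helper (not part of the playbook) and the exact items from this log:

```python
def ceph_lvm_names(osd_devices):
    """Derive {'data': 'osd-block-<uuid>', 'data_vg': 'ceph-<uuid>'} entries
    from a ceph_osd_devices-style dict, mirroring the loop items in the log."""
    return [
        {"data": f"osd-block-{v['osd_lvm_uuid']}",
         "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
        for v in osd_devices.values()
    ]

# The two OSD data disks from testbed-node-4 in this run:
volumes = ceph_lvm_names({
    "sdb": {"osd_lvm_uuid": "64a1f4ab-0c55-53ad-929a-fda4cfe46a02"},
    "sdc": {"osd_lvm_uuid": "ac535f4d-dfa1-5efd-bfb5-368e6c7a2160"},
})
print(volumes[0]["data_vg"])  # ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02
```

These derived pairs are the `{'data': ..., 'data_vg': ...}` items that the "Create block VGs" and "Create block LVs" tasks loop over.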
2026-02-19 03:35:33.167334 | orchestrator |
2026-02-19 03:35:33.167340 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-19 03:35:33.167347 | orchestrator | Thursday 19 February 2026 03:35:28 +0000 (0:00:00.136) 0:00:41.547 *****
2026-02-19 03:35:33.167354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:33.167360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:33.167367 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167374 | orchestrator |
2026-02-19 03:35:33.167380 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-19 03:35:33.167387 | orchestrator | Thursday 19 February 2026 03:35:29 +0000 (0:00:00.152) 0:00:41.700 *****
2026-02-19 03:35:33.167394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:33.167400 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:33.167407 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167414 | orchestrator |
2026-02-19 03:35:33.167467 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-19 03:35:33.167489 | orchestrator | Thursday 19 February 2026 03:35:29 +0000 (0:00:00.133) 0:00:41.833 *****
2026-02-19 03:35:33.167498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:33.167506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:33.167514 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167522 | orchestrator |
2026-02-19 03:35:33.167529 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-19 03:35:33.167537 | orchestrator | Thursday 19 February 2026 03:35:29 +0000 (0:00:00.162) 0:00:41.996 *****
2026-02-19 03:35:33.167550 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167558 | orchestrator |
2026-02-19 03:35:33.167566 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-19 03:35:33.167573 | orchestrator | Thursday 19 February 2026 03:35:29 +0000 (0:00:00.266) 0:00:42.263 *****
2026-02-19 03:35:33.167581 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167588 | orchestrator |
2026-02-19 03:35:33.167596 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-19 03:35:33.167603 | orchestrator | Thursday 19 February 2026 03:35:29 +0000 (0:00:00.133) 0:00:42.397 *****
2026-02-19 03:35:33.167611 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167618 | orchestrator |
2026-02-19 03:35:33.167626 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-19 03:35:33.167638 | orchestrator | Thursday 19 February 2026 03:35:29 +0000 (0:00:00.128) 0:00:42.526 *****
2026-02-19 03:35:33.167655 | orchestrator | ok: [testbed-node-4] => {
2026-02-19 03:35:33.167669 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-19 03:35:33.167681 | orchestrator | }
2026-02-19 03:35:33.167693 | orchestrator |
2026-02-19 03:35:33.167704 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-19 03:35:33.167715 | orchestrator | Thursday 19 February 2026 03:35:30 +0000 (0:00:00.138) 0:00:42.664 *****
2026-02-19 03:35:33.167726 | orchestrator | ok: [testbed-node-4] => {
2026-02-19 03:35:33.167738 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-19 03:35:33.167761 | orchestrator | }
2026-02-19 03:35:33.167773 | orchestrator |
2026-02-19 03:35:33.167784 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-19 03:35:33.167791 | orchestrator | Thursday 19 February 2026 03:35:30 +0000 (0:00:00.153) 0:00:42.818 *****
2026-02-19 03:35:33.167797 | orchestrator | ok: [testbed-node-4] => {
2026-02-19 03:35:33.167804 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-19 03:35:33.167811 | orchestrator | }
2026-02-19 03:35:33.167817 | orchestrator |
2026-02-19 03:35:33.167824 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-19 03:35:33.167831 | orchestrator | Thursday 19 February 2026 03:35:30 +0000 (0:00:00.150) 0:00:42.969 *****
2026-02-19 03:35:33.167837 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:35:33.167844 | orchestrator |
2026-02-19 03:35:33.167850 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-19 03:35:33.167857 | orchestrator | Thursday 19 February 2026 03:35:30 +0000 (0:00:00.532) 0:00:43.502 *****
2026-02-19 03:35:33.167863 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:35:33.167870 | orchestrator |
2026-02-19 03:35:33.167877 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-19 03:35:33.167883 | orchestrator | Thursday 19 February 2026 03:35:31 +0000 (0:00:00.535) 0:00:44.074 *****
2026-02-19 03:35:33.167890 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:35:33.167896 | orchestrator |
2026-02-19 03:35:33.167903 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-19 03:35:33.167909 | orchestrator | Thursday 19 February 2026 03:35:31 +0000 (0:00:00.535) 0:00:44.610 *****
2026-02-19 03:35:33.167916 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:35:33.167922 | orchestrator |
2026-02-19 03:35:33.167929 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-19 03:35:33.167936 | orchestrator | Thursday 19 February 2026 03:35:32 +0000 (0:00:00.138) 0:00:44.748 *****
2026-02-19 03:35:33.167942 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167949 | orchestrator |
2026-02-19 03:35:33.167955 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-19 03:35:33.167962 | orchestrator | Thursday 19 February 2026 03:35:32 +0000 (0:00:00.091) 0:00:44.840 *****
2026-02-19 03:35:33.167969 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.167975 | orchestrator |
2026-02-19 03:35:33.167982 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-19 03:35:33.167989 | orchestrator | Thursday 19 February 2026 03:35:32 +0000 (0:00:00.229) 0:00:45.069 *****
2026-02-19 03:35:33.167995 | orchestrator | ok: [testbed-node-4] => {
2026-02-19 03:35:33.168002 | orchestrator |  "vgs_report": {
2026-02-19 03:35:33.168009 | orchestrator |  "vg": []
2026-02-19 03:35:33.168015 | orchestrator |  }
2026-02-19 03:35:33.168022 | orchestrator | }
2026-02-19 03:35:33.168029 | orchestrator |
2026-02-19 03:35:33.168035 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-19 03:35:33.168042 | orchestrator | Thursday 19 February 2026 03:35:32 +0000 (0:00:00.139) 0:00:45.209 *****
2026-02-19 03:35:33.168048 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.168055 | orchestrator |
2026-02-19 03:35:33.168062 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-19 03:35:33.168068 | orchestrator | Thursday 19 February 2026 03:35:32 +0000 (0:00:00.146) 0:00:45.356 *****
2026-02-19 03:35:33.168075 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.168081 | orchestrator |
2026-02-19 03:35:33.168088 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-19 03:35:33.168094 | orchestrator | Thursday 19 February 2026 03:35:32 +0000 (0:00:00.130) 0:00:45.486 *****
2026-02-19 03:35:33.168101 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.168107 | orchestrator |
2026-02-19 03:35:33.168114 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-19 03:35:33.168121 | orchestrator | Thursday 19 February 2026 03:35:33 +0000 (0:00:00.141) 0:00:45.628 *****
2026-02-19 03:35:33.168133 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:33.168139 | orchestrator |
2026-02-19 03:35:33.168152 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-19 03:35:38.123193 | orchestrator | Thursday 19 February 2026 03:35:33 +0000 (0:00:00.144) 0:00:45.772 *****
2026-02-19 03:35:38.123284 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123295 | orchestrator |
2026-02-19 03:35:38.123302 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-19 03:35:38.123310 | orchestrator | Thursday 19 February 2026 03:35:33 +0000 (0:00:00.137) 0:00:45.909 *****
2026-02-19 03:35:38.123317 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123323 | orchestrator |
2026-02-19 03:35:38.123330 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-19 03:35:38.123337 | orchestrator | Thursday 19 February 2026 03:35:33 +0000 (0:00:00.149) 0:00:46.058 *****
2026-02-19 03:35:38.123343 | orchestrator | skipping: [testbed-node-4]
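Editor's note: the "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" tasks run LVM reporting commands with JSON output and combine them into `vgs_report`; on this node no DB/WAL VGs exist, so the report above is `{"vg": []}`. A sketch of parsing such a report, with a made-up VG name and sizes (the JSON shape follows `vgs --units b --reportformat json`, but `ceph-db-0` and the byte values are illustrative only):

```python
import json

# Hypothetical sample of a `vgs --units b --reportformat json` report.
sample = json.loads("""
{"report": [{"vg": [
  {"vg_name": "ceph-db-0", "vg_size": "107374182400B", "vg_free": "64424509440B"}
]}]}
""")

def vg_sizes(report):
    """Map VG name -> (total bytes, free bytes) from an LVM JSON report."""
    return {
        vg["vg_name"]: (int(vg["vg_size"].rstrip("B")),
                        int(vg["vg_free"].rstrip("B")))
        for vg in report["report"][0]["vg"]
    }

print(vg_sizes(sample))
```

With the empty `{"vg": []}` report from this run, the resulting mapping is empty, which is why every size calculation and size-check task that follows is skipped.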
2026-02-19 03:35:38.123350 | orchestrator |
2026-02-19 03:35:38.123373 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-19 03:35:38.123380 | orchestrator | Thursday 19 February 2026 03:35:33 +0000 (0:00:00.124) 0:00:46.183 *****
2026-02-19 03:35:38.123387 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123393 | orchestrator |
2026-02-19 03:35:38.123400 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-19 03:35:38.123406 | orchestrator | Thursday 19 February 2026 03:35:33 +0000 (0:00:00.121) 0:00:46.304 *****
2026-02-19 03:35:38.123429 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123434 | orchestrator |
2026-02-19 03:35:38.123438 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-19 03:35:38.123442 | orchestrator | Thursday 19 February 2026 03:35:33 +0000 (0:00:00.133) 0:00:46.438 *****
2026-02-19 03:35:38.123446 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123450 | orchestrator |
2026-02-19 03:35:38.123453 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-19 03:35:38.123458 | orchestrator | Thursday 19 February 2026 03:35:34 +0000 (0:00:00.330) 0:00:46.768 *****
2026-02-19 03:35:38.123462 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123465 | orchestrator |
2026-02-19 03:35:38.123469 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-19 03:35:38.123473 | orchestrator | Thursday 19 February 2026 03:35:34 +0000 (0:00:00.145) 0:00:46.913 *****
2026-02-19 03:35:38.123477 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123480 | orchestrator |
2026-02-19 03:35:38.123484 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-19 03:35:38.123488 | orchestrator | Thursday 19 February 2026 03:35:34 +0000 (0:00:00.148) 0:00:47.061 *****
2026-02-19 03:35:38.123491 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123495 | orchestrator |
2026-02-19 03:35:38.123499 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-19 03:35:38.123503 | orchestrator | Thursday 19 February 2026 03:35:34 +0000 (0:00:00.180) 0:00:47.242 *****
2026-02-19 03:35:38.123508 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123514 | orchestrator |
2026-02-19 03:35:38.123520 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-19 03:35:38.123526 | orchestrator | Thursday 19 February 2026 03:35:34 +0000 (0:00:00.144) 0:00:47.386 *****
2026-02-19 03:35:38.123534 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:38.123542 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:38.123548 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123570 | orchestrator |
2026-02-19 03:35:38.123582 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-19 03:35:38.123601 | orchestrator | Thursday 19 February 2026 03:35:34 +0000 (0:00:00.160) 0:00:47.547 *****
2026-02-19 03:35:38.123605 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:38.123609 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:38.123613 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123616 | orchestrator |
2026-02-19 03:35:38.123620 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-19 03:35:38.123624 | orchestrator | Thursday 19 February 2026 03:35:35 +0000 (0:00:00.161) 0:00:47.709 *****
2026-02-19 03:35:38.123627 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:38.123631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:38.123635 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123639 | orchestrator |
2026-02-19 03:35:38.123643 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-19 03:35:38.123646 | orchestrator | Thursday 19 February 2026 03:35:35 +0000 (0:00:00.154) 0:00:47.864 *****
2026-02-19 03:35:38.123650 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:38.123654 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:38.123658 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123661 | orchestrator |
2026-02-19 03:35:38.123678 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-19 03:35:38.123682 | orchestrator | Thursday 19 February 2026 03:35:35 +0000 (0:00:00.166) 0:00:48.030 *****
2026-02-19 03:35:38.123686 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:38.123690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:38.123693 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123697 | orchestrator |
2026-02-19 03:35:38.123706 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-19 03:35:38.123710 | orchestrator | Thursday 19 February 2026 03:35:35 +0000 (0:00:00.153) 0:00:48.184 *****
2026-02-19 03:35:38.123714 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:38.123717 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:38.123721 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123725 | orchestrator |
2026-02-19 03:35:38.123728 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-19 03:35:38.123732 | orchestrator | Thursday 19 February 2026 03:35:35 +0000 (0:00:00.159) 0:00:48.344 *****
2026-02-19 03:35:38.123736 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:38.123739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:38.123744 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123753 | orchestrator |
2026-02-19 03:35:38.123757 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-19 03:35:38.123762 | orchestrator | Thursday 19 February 2026 03:35:36 +0000 (0:00:00.389) 0:00:48.733 *****
2026-02-19 03:35:38.123766 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:38.123770 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:38.123775 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123779 | orchestrator |
2026-02-19 03:35:38.123783 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-19 03:35:38.123787 | orchestrator | Thursday 19 February 2026 03:35:36 +0000 (0:00:00.184) 0:00:48.917 *****
2026-02-19 03:35:38.123792 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:35:38.123797 | orchestrator |
2026-02-19 03:35:38.123801 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-19 03:35:38.123806 | orchestrator | Thursday 19 February 2026 03:35:36 +0000 (0:00:00.560) 0:00:49.478 *****
2026-02-19 03:35:38.123810 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:35:38.123814 | orchestrator |
2026-02-19 03:35:38.123819 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-19 03:35:38.123823 | orchestrator | Thursday 19 February 2026 03:35:37 +0000 (0:00:00.615) 0:00:50.094 *****
2026-02-19 03:35:38.123827 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:35:38.123832 | orchestrator |
2026-02-19 03:35:38.123836 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-19 03:35:38.123840 | orchestrator | Thursday 19 February 2026 03:35:37 +0000 (0:00:00.148) 0:00:50.242 *****
2026-02-19 03:35:38.123845 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'vg_name': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:38.123850 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'vg_name': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:38.123855 | orchestrator |
2026-02-19 03:35:38.123859 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-19 03:35:38.123863 | orchestrator | Thursday 19 February 2026 03:35:37 +0000 (0:00:00.185) 0:00:50.428 *****
2026-02-19 03:35:38.123868 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:38.123872 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:38.123876 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:38.123881 | orchestrator |
2026-02-19 03:35:38.123885 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-19 03:35:38.123889 | orchestrator | Thursday 19 February 2026 03:35:37 +0000 (0:00:00.140) 0:00:50.568 *****
2026-02-19 03:35:38.123894 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:35:38.123901 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:35:43.978937 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:35:43.979041 | orchestrator |
2026-02-19 03:35:43.979059 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-19 03:35:43.979072 |
orchestrator | Thursday 19 February 2026 03:35:38 +0000 (0:00:00.162) 0:00:50.731 ***** 2026-02-19 03:35:43.979083 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})  2026-02-19 03:35:43.979134 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})  2026-02-19 03:35:43.979146 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:35:43.979157 | orchestrator | 2026-02-19 03:35:43.979168 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-19 03:35:43.979179 | orchestrator | Thursday 19 February 2026 03:35:38 +0000 (0:00:00.155) 0:00:50.887 ***** 2026-02-19 03:35:43.979189 | orchestrator | ok: [testbed-node-4] => { 2026-02-19 03:35:43.979200 | orchestrator |  "lvm_report": { 2026-02-19 03:35:43.979212 | orchestrator |  "lv": [ 2026-02-19 03:35:43.979223 | orchestrator |  { 2026-02-19 03:35:43.979234 | orchestrator |  "lv_name": "osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02", 2026-02-19 03:35:43.979245 | orchestrator |  "vg_name": "ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02" 2026-02-19 03:35:43.979256 | orchestrator |  }, 2026-02-19 03:35:43.979266 | orchestrator |  { 2026-02-19 03:35:43.979277 | orchestrator |  "lv_name": "osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160", 2026-02-19 03:35:43.979287 | orchestrator |  "vg_name": "ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160" 2026-02-19 03:35:43.979298 | orchestrator |  } 2026-02-19 03:35:43.979308 | orchestrator |  ], 2026-02-19 03:35:43.979319 | orchestrator |  "pv": [ 2026-02-19 03:35:43.979329 | orchestrator |  { 2026-02-19 03:35:43.979340 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-19 03:35:43.979350 | orchestrator |  "vg_name": "ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02" 2026-02-19 03:35:43.979362 | orchestrator |  }, 2026-02-19 
03:35:43.979373 | orchestrator |  { 2026-02-19 03:35:43.979384 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-19 03:35:43.979394 | orchestrator |  "vg_name": "ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160" 2026-02-19 03:35:43.979405 | orchestrator |  } 2026-02-19 03:35:43.979449 | orchestrator |  ] 2026-02-19 03:35:43.979468 | orchestrator |  } 2026-02-19 03:35:43.979488 | orchestrator | } 2026-02-19 03:35:43.979508 | orchestrator | 2026-02-19 03:35:43.979529 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-19 03:35:43.979548 | orchestrator | 2026-02-19 03:35:43.979569 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-19 03:35:43.979589 | orchestrator | Thursday 19 February 2026 03:35:38 +0000 (0:00:00.262) 0:00:51.150 ***** 2026-02-19 03:35:43.979608 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-19 03:35:43.979620 | orchestrator | 2026-02-19 03:35:43.979632 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-19 03:35:43.979644 | orchestrator | Thursday 19 February 2026 03:35:39 +0000 (0:00:00.543) 0:00:51.694 ***** 2026-02-19 03:35:43.979656 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:35:43.979668 | orchestrator | 2026-02-19 03:35:43.979682 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.979694 | orchestrator | Thursday 19 February 2026 03:35:39 +0000 (0:00:00.216) 0:00:51.910 ***** 2026-02-19 03:35:43.979707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-19 03:35:43.979717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-19 03:35:43.979728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-19 03:35:43.979739 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-19 03:35:43.979749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-19 03:35:43.979759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-19 03:35:43.979770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-19 03:35:43.979790 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-19 03:35:43.979801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-19 03:35:43.979812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-19 03:35:43.979822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-19 03:35:43.979833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-19 03:35:43.979843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-19 03:35:43.979854 | orchestrator | 2026-02-19 03:35:43.979864 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.979875 | orchestrator | Thursday 19 February 2026 03:35:39 +0000 (0:00:00.380) 0:00:52.291 ***** 2026-02-19 03:35:43.979885 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:43.979896 | orchestrator | 2026-02-19 03:35:43.979906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.979917 | orchestrator | Thursday 19 February 2026 03:35:39 +0000 (0:00:00.196) 0:00:52.488 ***** 2026-02-19 03:35:43.979928 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:43.979938 | orchestrator | 2026-02-19 
03:35:43.979949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.979979 | orchestrator | Thursday 19 February 2026 03:35:40 +0000 (0:00:00.228) 0:00:52.716 ***** 2026-02-19 03:35:43.979991 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:43.980001 | orchestrator | 2026-02-19 03:35:43.980012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.980023 | orchestrator | Thursday 19 February 2026 03:35:40 +0000 (0:00:00.186) 0:00:52.903 ***** 2026-02-19 03:35:43.980033 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:43.980043 | orchestrator | 2026-02-19 03:35:43.980054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.980065 | orchestrator | Thursday 19 February 2026 03:35:40 +0000 (0:00:00.188) 0:00:53.091 ***** 2026-02-19 03:35:43.980076 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:43.980087 | orchestrator | 2026-02-19 03:35:43.980097 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.980108 | orchestrator | Thursday 19 February 2026 03:35:40 +0000 (0:00:00.189) 0:00:53.281 ***** 2026-02-19 03:35:43.980119 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:43.980129 | orchestrator | 2026-02-19 03:35:43.980139 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.980150 | orchestrator | Thursday 19 February 2026 03:35:40 +0000 (0:00:00.186) 0:00:53.467 ***** 2026-02-19 03:35:43.980160 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:43.980171 | orchestrator | 2026-02-19 03:35:43.980181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.980192 | orchestrator | Thursday 19 February 2026 03:35:41 +0000 (0:00:00.173) 
0:00:53.641 ***** 2026-02-19 03:35:43.980202 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:43.980212 | orchestrator | 2026-02-19 03:35:43.980223 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.980238 | orchestrator | Thursday 19 February 2026 03:35:41 +0000 (0:00:00.503) 0:00:54.144 ***** 2026-02-19 03:35:43.980256 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf) 2026-02-19 03:35:43.980267 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf) 2026-02-19 03:35:43.980278 | orchestrator | 2026-02-19 03:35:43.980289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.980299 | orchestrator | Thursday 19 February 2026 03:35:41 +0000 (0:00:00.401) 0:00:54.545 ***** 2026-02-19 03:35:43.980344 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42) 2026-02-19 03:35:43.980363 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42) 2026-02-19 03:35:43.980374 | orchestrator | 2026-02-19 03:35:43.980384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.980395 | orchestrator | Thursday 19 February 2026 03:35:42 +0000 (0:00:00.462) 0:00:55.008 ***** 2026-02-19 03:35:43.980437 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46) 2026-02-19 03:35:43.980458 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46) 2026-02-19 03:35:43.980476 | orchestrator | 2026-02-19 03:35:43.980494 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.980511 | orchestrator | Thursday 19 
February 2026 03:35:42 +0000 (0:00:00.458) 0:00:55.467 ***** 2026-02-19 03:35:43.980528 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b) 2026-02-19 03:35:43.980547 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b) 2026-02-19 03:35:43.980565 | orchestrator | 2026-02-19 03:35:43.980581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-19 03:35:43.980599 | orchestrator | Thursday 19 February 2026 03:35:43 +0000 (0:00:00.409) 0:00:55.876 ***** 2026-02-19 03:35:43.980618 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-19 03:35:43.980636 | orchestrator | 2026-02-19 03:35:43.980654 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:43.980672 | orchestrator | Thursday 19 February 2026 03:35:43 +0000 (0:00:00.323) 0:00:56.199 ***** 2026-02-19 03:35:43.980691 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-19 03:35:43.980708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-19 03:35:43.980727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-19 03:35:43.980746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-19 03:35:43.980764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-19 03:35:43.980782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-19 03:35:43.980799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-19 03:35:43.980819 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-19 03:35:43.980838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-19 03:35:43.980857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-19 03:35:43.980875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-19 03:35:43.980905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-19 03:35:52.881031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-19 03:35:52.881164 | orchestrator | 2026-02-19 03:35:52.881180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881190 | orchestrator | Thursday 19 February 2026 03:35:43 +0000 (0:00:00.381) 0:00:56.580 ***** 2026-02-19 03:35:52.881200 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.881210 | orchestrator | 2026-02-19 03:35:52.881219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881243 | orchestrator | Thursday 19 February 2026 03:35:44 +0000 (0:00:00.183) 0:00:56.764 ***** 2026-02-19 03:35:52.881262 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.881291 | orchestrator | 2026-02-19 03:35:52.881301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881310 | orchestrator | Thursday 19 February 2026 03:35:44 +0000 (0:00:00.193) 0:00:56.958 ***** 2026-02-19 03:35:52.881319 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.881328 | orchestrator | 2026-02-19 03:35:52.881337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881346 | 
orchestrator | Thursday 19 February 2026 03:35:44 +0000 (0:00:00.187) 0:00:57.146 ***** 2026-02-19 03:35:52.881355 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.881364 | orchestrator | 2026-02-19 03:35:52.881373 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881382 | orchestrator | Thursday 19 February 2026 03:35:44 +0000 (0:00:00.192) 0:00:57.338 ***** 2026-02-19 03:35:52.881391 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.881399 | orchestrator | 2026-02-19 03:35:52.881456 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881468 | orchestrator | Thursday 19 February 2026 03:35:45 +0000 (0:00:00.631) 0:00:57.970 ***** 2026-02-19 03:35:52.881477 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.881485 | orchestrator | 2026-02-19 03:35:52.881494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881503 | orchestrator | Thursday 19 February 2026 03:35:45 +0000 (0:00:00.223) 0:00:58.193 ***** 2026-02-19 03:35:52.881511 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.881520 | orchestrator | 2026-02-19 03:35:52.881528 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881537 | orchestrator | Thursday 19 February 2026 03:35:45 +0000 (0:00:00.227) 0:00:58.421 ***** 2026-02-19 03:35:52.881546 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.881555 | orchestrator | 2026-02-19 03:35:52.881564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881574 | orchestrator | Thursday 19 February 2026 03:35:46 +0000 (0:00:00.207) 0:00:58.628 ***** 2026-02-19 03:35:52.881584 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-19 03:35:52.881595 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-02-19 03:35:52.881605 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-19 03:35:52.881614 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-19 03:35:52.881624 | orchestrator | 2026-02-19 03:35:52.881634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881643 | orchestrator | Thursday 19 February 2026 03:35:46 +0000 (0:00:00.698) 0:00:59.326 ***** 2026-02-19 03:35:52.881653 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.881667 | orchestrator | 2026-02-19 03:35:52.881682 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881697 | orchestrator | Thursday 19 February 2026 03:35:46 +0000 (0:00:00.225) 0:00:59.552 ***** 2026-02-19 03:35:52.881712 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.881726 | orchestrator | 2026-02-19 03:35:52.881741 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881756 | orchestrator | Thursday 19 February 2026 03:35:47 +0000 (0:00:00.215) 0:00:59.767 ***** 2026-02-19 03:35:52.881770 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.881784 | orchestrator | 2026-02-19 03:35:52.881799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-19 03:35:52.881812 | orchestrator | Thursday 19 February 2026 03:35:47 +0000 (0:00:00.210) 0:00:59.978 ***** 2026-02-19 03:35:52.881828 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.881842 | orchestrator | 2026-02-19 03:35:52.881858 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-19 03:35:52.881874 | orchestrator | Thursday 19 February 2026 03:35:47 +0000 (0:00:00.202) 0:01:00.181 ***** 2026-02-19 03:35:52.881890 | orchestrator | skipping: [testbed-node-5] 2026-02-19 
03:35:52.881907 | orchestrator | 2026-02-19 03:35:52.881938 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-19 03:35:52.881955 | orchestrator | Thursday 19 February 2026 03:35:47 +0000 (0:00:00.140) 0:01:00.321 ***** 2026-02-19 03:35:52.881970 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98b2861f-503b-5d91-adc9-6468e68ac210'}}) 2026-02-19 03:35:52.881999 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3bb39c06-9317-5e70-9108-eeec2efc4456'}}) 2026-02-19 03:35:52.882013 | orchestrator | 2026-02-19 03:35:52.882075 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-19 03:35:52.882089 | orchestrator | Thursday 19 February 2026 03:35:47 +0000 (0:00:00.215) 0:01:00.537 ***** 2026-02-19 03:35:52.882106 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'}) 2026-02-19 03:35:52.882123 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'}) 2026-02-19 03:35:52.882138 | orchestrator | 2026-02-19 03:35:52.882152 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-19 03:35:52.882190 | orchestrator | Thursday 19 February 2026 03:35:49 +0000 (0:00:02.015) 0:01:02.552 ***** 2026-02-19 03:35:52.882206 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:35:52.882222 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:35:52.882236 | orchestrator | skipping: 
[testbed-node-5] 2026-02-19 03:35:52.882251 | orchestrator | 2026-02-19 03:35:52.882274 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-19 03:35:52.882289 | orchestrator | Thursday 19 February 2026 03:35:50 +0000 (0:00:00.379) 0:01:02.932 ***** 2026-02-19 03:35:52.882305 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'}) 2026-02-19 03:35:52.882320 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'}) 2026-02-19 03:35:52.882335 | orchestrator | 2026-02-19 03:35:52.882345 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-19 03:35:52.882354 | orchestrator | Thursday 19 February 2026 03:35:51 +0000 (0:00:01.295) 0:01:04.227 ***** 2026-02-19 03:35:52.882362 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:35:52.882371 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:35:52.882380 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.882388 | orchestrator | 2026-02-19 03:35:52.882397 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-19 03:35:52.882406 | orchestrator | Thursday 19 February 2026 03:35:51 +0000 (0:00:00.146) 0:01:04.373 ***** 2026-02-19 03:35:52.882494 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.882503 | orchestrator | 2026-02-19 03:35:52.882512 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-19 03:35:52.882521 | 
orchestrator | Thursday 19 February 2026 03:35:51 +0000 (0:00:00.132) 0:01:04.506 ***** 2026-02-19 03:35:52.882529 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:35:52.882538 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:35:52.882556 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.882564 | orchestrator | 2026-02-19 03:35:52.882573 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-19 03:35:52.882581 | orchestrator | Thursday 19 February 2026 03:35:52 +0000 (0:00:00.147) 0:01:04.654 ***** 2026-02-19 03:35:52.882590 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.882598 | orchestrator | 2026-02-19 03:35:52.882607 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-19 03:35:52.882615 | orchestrator | Thursday 19 February 2026 03:35:52 +0000 (0:00:00.133) 0:01:04.788 ***** 2026-02-19 03:35:52.882624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:35:52.882632 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:35:52.882641 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.882649 | orchestrator | 2026-02-19 03:35:52.882658 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-19 03:35:52.882667 | orchestrator | Thursday 19 February 2026 03:35:52 +0000 (0:00:00.142) 0:01:04.930 ***** 2026-02-19 03:35:52.882675 | orchestrator | 
skipping: [testbed-node-5] 2026-02-19 03:35:52.882684 | orchestrator | 2026-02-19 03:35:52.882692 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-19 03:35:52.882701 | orchestrator | Thursday 19 February 2026 03:35:52 +0000 (0:00:00.134) 0:01:05.064 ***** 2026-02-19 03:35:52.882710 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:35:52.882718 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:35:52.882727 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:52.882736 | orchestrator | 2026-02-19 03:35:52.882744 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-19 03:35:52.882753 | orchestrator | Thursday 19 February 2026 03:35:52 +0000 (0:00:00.160) 0:01:05.224 ***** 2026-02-19 03:35:52.882762 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:35:52.882771 | orchestrator | 2026-02-19 03:35:52.882779 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-19 03:35:52.882788 | orchestrator | Thursday 19 February 2026 03:35:52 +0000 (0:00:00.122) 0:01:05.347 ***** 2026-02-19 03:35:52.882805 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:35:58.719755 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:35:58.719864 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.719878 | orchestrator | 2026-02-19 03:35:58.719890 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-02-19 03:35:58.719900 | orchestrator | Thursday 19 February 2026 03:35:52 +0000 (0:00:00.141) 0:01:05.489 ***** 2026-02-19 03:35:58.719925 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:35:58.719935 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:35:58.719944 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.719953 | orchestrator | 2026-02-19 03:35:58.719962 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-19 03:35:58.719971 | orchestrator | Thursday 19 February 2026 03:35:53 +0000 (0:00:00.141) 0:01:05.630 ***** 2026-02-19 03:35:58.719998 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:35:58.720007 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:35:58.720016 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720024 | orchestrator | 2026-02-19 03:35:58.720033 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-19 03:35:58.720042 | orchestrator | Thursday 19 February 2026 03:35:53 +0000 (0:00:00.291) 0:01:05.921 ***** 2026-02-19 03:35:58.720050 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720059 | orchestrator | 2026-02-19 03:35:58.720067 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-19 03:35:58.720076 | orchestrator | Thursday 19 February 2026 03:35:53 
+0000 (0:00:00.130) 0:01:06.052 ***** 2026-02-19 03:35:58.720084 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720094 | orchestrator | 2026-02-19 03:35:58.720103 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-19 03:35:58.720111 | orchestrator | Thursday 19 February 2026 03:35:53 +0000 (0:00:00.131) 0:01:06.183 ***** 2026-02-19 03:35:58.720120 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720128 | orchestrator | 2026-02-19 03:35:58.720137 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-19 03:35:58.720146 | orchestrator | Thursday 19 February 2026 03:35:53 +0000 (0:00:00.133) 0:01:06.317 ***** 2026-02-19 03:35:58.720154 | orchestrator | ok: [testbed-node-5] => { 2026-02-19 03:35:58.720163 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-19 03:35:58.720172 | orchestrator | } 2026-02-19 03:35:58.720181 | orchestrator | 2026-02-19 03:35:58.720189 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-19 03:35:58.720198 | orchestrator | Thursday 19 February 2026 03:35:53 +0000 (0:00:00.133) 0:01:06.451 ***** 2026-02-19 03:35:58.720207 | orchestrator | ok: [testbed-node-5] => { 2026-02-19 03:35:58.720215 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-19 03:35:58.720224 | orchestrator | } 2026-02-19 03:35:58.720232 | orchestrator | 2026-02-19 03:35:58.720241 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-19 03:35:58.720250 | orchestrator | Thursday 19 February 2026 03:35:53 +0000 (0:00:00.134) 0:01:06.586 ***** 2026-02-19 03:35:58.720258 | orchestrator | ok: [testbed-node-5] => { 2026-02-19 03:35:58.720268 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-19 03:35:58.720278 | orchestrator | } 2026-02-19 03:35:58.720287 | orchestrator | 2026-02-19 03:35:58.720297 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-19 03:35:58.720307 | orchestrator | Thursday 19 February 2026 03:35:54 +0000 (0:00:00.146) 0:01:06.732 ***** 2026-02-19 03:35:58.720316 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:35:58.720326 | orchestrator | 2026-02-19 03:35:58.720336 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-19 03:35:58.720346 | orchestrator | Thursday 19 February 2026 03:35:54 +0000 (0:00:00.513) 0:01:07.246 ***** 2026-02-19 03:35:58.720356 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:35:58.720366 | orchestrator | 2026-02-19 03:35:58.720377 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-19 03:35:58.720387 | orchestrator | Thursday 19 February 2026 03:35:55 +0000 (0:00:00.520) 0:01:07.767 ***** 2026-02-19 03:35:58.720397 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:35:58.720431 | orchestrator | 2026-02-19 03:35:58.720442 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-19 03:35:58.720451 | orchestrator | Thursday 19 February 2026 03:35:55 +0000 (0:00:00.518) 0:01:08.285 ***** 2026-02-19 03:35:58.720461 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:35:58.720470 | orchestrator | 2026-02-19 03:35:58.720480 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-19 03:35:58.720495 | orchestrator | Thursday 19 February 2026 03:35:55 +0000 (0:00:00.144) 0:01:08.429 ***** 2026-02-19 03:35:58.720505 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720515 | orchestrator | 2026-02-19 03:35:58.720525 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-19 03:35:58.720535 | orchestrator | Thursday 19 February 2026 03:35:55 +0000 (0:00:00.101) 0:01:08.531 ***** 2026-02-19 03:35:58.720544 | 
orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720554 | orchestrator | 2026-02-19 03:35:58.720564 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-19 03:35:58.720574 | orchestrator | Thursday 19 February 2026 03:35:56 +0000 (0:00:00.289) 0:01:08.821 ***** 2026-02-19 03:35:58.720584 | orchestrator | ok: [testbed-node-5] => { 2026-02-19 03:35:58.720594 | orchestrator |  "vgs_report": { 2026-02-19 03:35:58.720605 | orchestrator |  "vg": [] 2026-02-19 03:35:58.720631 | orchestrator |  } 2026-02-19 03:35:58.720641 | orchestrator | } 2026-02-19 03:35:58.720649 | orchestrator | 2026-02-19 03:35:58.720658 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-19 03:35:58.720667 | orchestrator | Thursday 19 February 2026 03:35:56 +0000 (0:00:00.120) 0:01:08.941 ***** 2026-02-19 03:35:58.720675 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720684 | orchestrator | 2026-02-19 03:35:58.720692 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-19 03:35:58.720700 | orchestrator | Thursday 19 February 2026 03:35:56 +0000 (0:00:00.133) 0:01:09.075 ***** 2026-02-19 03:35:58.720714 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720722 | orchestrator | 2026-02-19 03:35:58.720731 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-19 03:35:58.720739 | orchestrator | Thursday 19 February 2026 03:35:56 +0000 (0:00:00.133) 0:01:09.209 ***** 2026-02-19 03:35:58.720748 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720756 | orchestrator | 2026-02-19 03:35:58.720765 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-19 03:35:58.720773 | orchestrator | Thursday 19 February 2026 03:35:56 +0000 (0:00:00.119) 0:01:09.328 ***** 2026-02-19 03:35:58.720782 | 
orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720790 | orchestrator | 2026-02-19 03:35:58.720798 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-19 03:35:58.720807 | orchestrator | Thursday 19 February 2026 03:35:56 +0000 (0:00:00.114) 0:01:09.443 ***** 2026-02-19 03:35:58.720815 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720824 | orchestrator | 2026-02-19 03:35:58.720832 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-19 03:35:58.720841 | orchestrator | Thursday 19 February 2026 03:35:56 +0000 (0:00:00.121) 0:01:09.565 ***** 2026-02-19 03:35:58.720849 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720858 | orchestrator | 2026-02-19 03:35:58.720866 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-19 03:35:58.720874 | orchestrator | Thursday 19 February 2026 03:35:57 +0000 (0:00:00.117) 0:01:09.682 ***** 2026-02-19 03:35:58.720883 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720891 | orchestrator | 2026-02-19 03:35:58.720900 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-19 03:35:58.720909 | orchestrator | Thursday 19 February 2026 03:35:57 +0000 (0:00:00.127) 0:01:09.810 ***** 2026-02-19 03:35:58.720917 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720925 | orchestrator | 2026-02-19 03:35:58.720934 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-19 03:35:58.720942 | orchestrator | Thursday 19 February 2026 03:35:57 +0000 (0:00:00.139) 0:01:09.949 ***** 2026-02-19 03:35:58.720951 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720959 | orchestrator | 2026-02-19 03:35:58.720968 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-02-19 03:35:58.720977 | orchestrator | Thursday 19 February 2026 03:35:57 +0000 (0:00:00.127) 0:01:10.077 ***** 2026-02-19 03:35:58.720990 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.720999 | orchestrator | 2026-02-19 03:35:58.721007 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-19 03:35:58.721016 | orchestrator | Thursday 19 February 2026 03:35:57 +0000 (0:00:00.123) 0:01:10.200 ***** 2026-02-19 03:35:58.721025 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.721033 | orchestrator | 2026-02-19 03:35:58.721041 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-19 03:35:58.721050 | orchestrator | Thursday 19 February 2026 03:35:57 +0000 (0:00:00.244) 0:01:10.445 ***** 2026-02-19 03:35:58.721058 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.721067 | orchestrator | 2026-02-19 03:35:58.721075 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-19 03:35:58.721084 | orchestrator | Thursday 19 February 2026 03:35:57 +0000 (0:00:00.116) 0:01:10.562 ***** 2026-02-19 03:35:58.721092 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.721100 | orchestrator | 2026-02-19 03:35:58.721109 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-19 03:35:58.721117 | orchestrator | Thursday 19 February 2026 03:35:58 +0000 (0:00:00.112) 0:01:10.675 ***** 2026-02-19 03:35:58.721126 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.721134 | orchestrator | 2026-02-19 03:35:58.721142 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-19 03:35:58.721151 | orchestrator | Thursday 19 February 2026 03:35:58 +0000 (0:00:00.137) 0:01:10.812 ***** 2026-02-19 03:35:58.721159 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:35:58.721168 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:35:58.721177 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.721185 | orchestrator | 2026-02-19 03:35:58.721193 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-19 03:35:58.721202 | orchestrator | Thursday 19 February 2026 03:35:58 +0000 (0:00:00.158) 0:01:10.971 ***** 2026-02-19 03:35:58.721210 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:35:58.721219 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:35:58.721227 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:35:58.721236 | orchestrator | 2026-02-19 03:35:58.721244 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-19 03:35:58.721253 | orchestrator | Thursday 19 February 2026 03:35:58 +0000 (0:00:00.175) 0:01:11.147 ***** 2026-02-19 03:35:58.721267 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:36:01.664931 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:36:01.665035 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:01.665051 | orchestrator | 2026-02-19 03:36:01.665081 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-19 03:36:01.665094 | orchestrator | Thursday 19 February 2026 03:35:58 +0000 (0:00:00.180) 0:01:11.327 ***** 2026-02-19 03:36:01.665105 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:36:01.665116 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:36:01.665152 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:01.665162 | orchestrator | 2026-02-19 03:36:01.665173 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-19 03:36:01.665184 | orchestrator | Thursday 19 February 2026 03:35:58 +0000 (0:00:00.146) 0:01:11.474 ***** 2026-02-19 03:36:01.665194 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:36:01.665205 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:36:01.665216 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:01.665226 | orchestrator | 2026-02-19 03:36:01.665236 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-19 03:36:01.665246 | orchestrator | Thursday 19 February 2026 03:35:59 +0000 (0:00:00.154) 0:01:11.629 ***** 2026-02-19 03:36:01.665256 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:36:01.665266 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:36:01.665276 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:01.665287 | orchestrator | 2026-02-19 03:36:01.665297 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-19 03:36:01.665307 | orchestrator | Thursday 19 February 2026 03:35:59 +0000 (0:00:00.151) 0:01:11.781 ***** 2026-02-19 03:36:01.665317 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:36:01.665327 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:36:01.665338 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:01.665348 | orchestrator | 2026-02-19 03:36:01.665358 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-19 03:36:01.665368 | orchestrator | Thursday 19 February 2026 03:35:59 +0000 (0:00:00.157) 0:01:11.938 ***** 2026-02-19 03:36:01.665379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:36:01.665390 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:36:01.665400 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:01.665436 | orchestrator | 2026-02-19 03:36:01.665446 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-19 03:36:01.665456 | orchestrator | Thursday 19 February 2026 03:35:59 +0000 (0:00:00.149) 0:01:12.088 ***** 2026-02-19 03:36:01.665467 | 
orchestrator | ok: [testbed-node-5] 2026-02-19 03:36:01.665478 | orchestrator | 2026-02-19 03:36:01.665489 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-19 03:36:01.665500 | orchestrator | Thursday 19 February 2026 03:36:00 +0000 (0:00:00.681) 0:01:12.769 ***** 2026-02-19 03:36:01.665510 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:36:01.665521 | orchestrator | 2026-02-19 03:36:01.665531 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-19 03:36:01.665543 | orchestrator | Thursday 19 February 2026 03:36:00 +0000 (0:00:00.574) 0:01:13.344 ***** 2026-02-19 03:36:01.665554 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:36:01.665564 | orchestrator | 2026-02-19 03:36:01.665574 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-19 03:36:01.665585 | orchestrator | Thursday 19 February 2026 03:36:00 +0000 (0:00:00.140) 0:01:13.484 ***** 2026-02-19 03:36:01.665606 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'vg_name': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'}) 2026-02-19 03:36:01.665618 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'vg_name': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'}) 2026-02-19 03:36:01.665629 | orchestrator | 2026-02-19 03:36:01.665639 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-19 03:36:01.665650 | orchestrator | Thursday 19 February 2026 03:36:01 +0000 (0:00:00.161) 0:01:13.646 ***** 2026-02-19 03:36:01.665679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:36:01.665697 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:36:01.665709 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:01.665720 | orchestrator | 2026-02-19 03:36:01.665732 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-19 03:36:01.665742 | orchestrator | Thursday 19 February 2026 03:36:01 +0000 (0:00:00.149) 0:01:13.796 ***** 2026-02-19 03:36:01.665753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:36:01.665764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:36:01.665774 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:01.665784 | orchestrator | 2026-02-19 03:36:01.665795 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-19 03:36:01.665806 | orchestrator | Thursday 19 February 2026 03:36:01 +0000 (0:00:00.153) 0:01:13.949 ***** 2026-02-19 03:36:01.665816 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})  2026-02-19 03:36:01.665826 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})  2026-02-19 03:36:01.665836 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:01.665845 | orchestrator | 2026-02-19 03:36:01.665855 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-19 03:36:01.665865 | orchestrator | Thursday 19 February 2026 03:36:01 +0000 (0:00:00.153) 0:01:14.103 ***** 2026-02-19 03:36:01.665875 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-19 03:36:01.665885 | orchestrator |  "lvm_report": { 2026-02-19 03:36:01.665895 | orchestrator |  "lv": [ 2026-02-19 03:36:01.665905 | orchestrator |  { 2026-02-19 03:36:01.665916 | orchestrator |  "lv_name": "osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456", 2026-02-19 03:36:01.665927 | orchestrator |  "vg_name": "ceph-3bb39c06-9317-5e70-9108-eeec2efc4456" 2026-02-19 03:36:01.665937 | orchestrator |  }, 2026-02-19 03:36:01.665947 | orchestrator |  { 2026-02-19 03:36:01.665957 | orchestrator |  "lv_name": "osd-block-98b2861f-503b-5d91-adc9-6468e68ac210", 2026-02-19 03:36:01.665967 | orchestrator |  "vg_name": "ceph-98b2861f-503b-5d91-adc9-6468e68ac210" 2026-02-19 03:36:01.665978 | orchestrator |  } 2026-02-19 03:36:01.665988 | orchestrator |  ], 2026-02-19 03:36:01.665998 | orchestrator |  "pv": [ 2026-02-19 03:36:01.666008 | orchestrator |  { 2026-02-19 03:36:01.666067 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-19 03:36:01.666079 | orchestrator |  "vg_name": "ceph-98b2861f-503b-5d91-adc9-6468e68ac210" 2026-02-19 03:36:01.666090 | orchestrator |  }, 2026-02-19 03:36:01.666101 | orchestrator |  { 2026-02-19 03:36:01.666112 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-19 03:36:01.666134 | orchestrator |  "vg_name": "ceph-3bb39c06-9317-5e70-9108-eeec2efc4456" 2026-02-19 03:36:01.666145 | orchestrator |  } 2026-02-19 03:36:01.666155 | orchestrator |  ] 2026-02-19 03:36:01.666166 | orchestrator |  } 2026-02-19 03:36:01.666177 | orchestrator | } 2026-02-19 03:36:01.666188 | orchestrator | 2026-02-19 03:36:01.666199 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:36:01.666210 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-19 03:36:01.666220 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-19 03:36:01.666231 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-19 03:36:01.666241 | orchestrator | 2026-02-19 03:36:01.666252 | orchestrator | 2026-02-19 03:36:01.666263 | orchestrator | 2026-02-19 03:36:01.666274 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:36:01.666285 | orchestrator | Thursday 19 February 2026 03:36:01 +0000 (0:00:00.144) 0:01:14.247 ***** 2026-02-19 03:36:01.666295 | orchestrator | =============================================================================== 2026-02-19 03:36:01.666306 | orchestrator | Create block VGs -------------------------------------------------------- 6.11s 2026-02-19 03:36:01.666316 | orchestrator | Create block LVs -------------------------------------------------------- 4.30s 2026-02-19 03:36:01.666327 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.82s 2026-02-19 03:36:01.666338 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.80s 2026-02-19 03:36:01.666349 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.75s 2026-02-19 03:36:01.666360 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.65s 2026-02-19 03:36:01.666370 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.59s 2026-02-19 03:36:01.666381 | orchestrator | Add known links to the list of available block devices ------------------ 1.33s 2026-02-19 03:36:01.666399 | orchestrator | Add known partitions to the list of available block devices ------------- 1.21s 2026-02-19 03:36:01.924618 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.05s 2026-02-19 03:36:01.924696 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s 2026-02-19 03:36:01.924704 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.87s 2026-02-19 03:36:01.924732 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-02-19 03:36:01.924739 | orchestrator | Print LVM report data --------------------------------------------------- 0.72s 2026-02-19 03:36:01.924745 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.72s 2026-02-19 03:36:01.924751 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.71s 2026-02-19 03:36:01.924757 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s 2026-02-19 03:36:01.924762 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-02-19 03:36:01.924769 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-02-19 03:36:01.924774 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.68s 2026-02-19 03:36:14.052167 | orchestrator | 2026-02-19 03:36:14 | INFO  | Task 5c4d7526-60d4-43e3-a3b8-a40f455a0257 (facts) was prepared for execution. 2026-02-19 03:36:14.052338 | orchestrator | 2026-02-19 03:36:14 | INFO  | It takes a moment until task 5c4d7526-60d4-43e3-a3b8-a40f455a0257 (facts) has been started and output is visible here. 
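The "Gather DB/WAL VGs with total and available size in bytes" tasks in the play above collect LVM volume-group sizes as JSON before the size checks run. As a minimal sketch of what that parsing step amounts to — assuming report output shaped like `vgs --reportformat json --units b` produces (the VG name and byte values below are illustrative, not taken from a real host) — the report can be reduced to a name-to-(total, free) map:

```python
import json

# Hypothetical vgs JSON report, shaped like `vgs --reportformat json --units b`
# output. The VG name and sizes are made up for illustration.
vgs_json = """
{
  "report": [
    {
      "vg": [
        {"vg_name": "ceph-db-0", "vg_size": "107374182400B", "vg_free": "64424509440B"}
      ]
    }
  ]
}
"""

def vg_sizes(report_text):
    """Map each VG name to (total_bytes, free_bytes) from a vgs JSON report."""
    report = json.loads(report_text)
    sizes = {}
    for block in report["report"]:
        for vg in block.get("vg", []):
            total = int(vg["vg_size"].rstrip("B"))
            free = int(vg["vg_free"].rstrip("B"))
            sizes[vg["vg_name"]] = (total, free)
    return sizes

print(vg_sizes(vgs_json))
```

With an empty `vg` list — as in the `vgs_report: {"vg": []}` printed for testbed-node-5 above — the map comes back empty, which is why the subsequent size-calculation and fail-if tasks are all skipped.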
2026-02-19 03:36:26.979199 | orchestrator | 2026-02-19 03:36:26.979364 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-19 03:36:26.979535 | orchestrator | 2026-02-19 03:36:26.979558 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-19 03:36:26.979576 | orchestrator | Thursday 19 February 2026 03:36:18 +0000 (0:00:00.352) 0:00:00.352 ***** 2026-02-19 03:36:26.979593 | orchestrator | ok: [testbed-manager] 2026-02-19 03:36:26.979612 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:36:26.979628 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:36:26.979644 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:36:26.979659 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:36:26.979675 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:36:26.979691 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:36:26.979708 | orchestrator | 2026-02-19 03:36:26.979725 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-19 03:36:26.979742 | orchestrator | Thursday 19 February 2026 03:36:19 +0000 (0:00:01.130) 0:00:01.482 ***** 2026-02-19 03:36:26.979758 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:36:26.979776 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:36:26.979792 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:36:26.979807 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:36:26.979823 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:36:26.979838 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:36:26.979854 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:26.979870 | orchestrator | 2026-02-19 03:36:26.979886 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-19 03:36:26.979902 | orchestrator | 2026-02-19 03:36:26.979917 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-19 03:36:26.979933 | orchestrator | Thursday 19 February 2026 03:36:20 +0000 (0:00:01.167) 0:00:02.650 ***** 2026-02-19 03:36:26.979950 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:36:26.979966 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:36:26.979982 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:36:26.979998 | orchestrator | ok: [testbed-manager] 2026-02-19 03:36:26.980014 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:36:26.980030 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:36:26.980046 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:36:26.980061 | orchestrator | 2026-02-19 03:36:26.980076 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-19 03:36:26.980092 | orchestrator | 2026-02-19 03:36:26.980107 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-19 03:36:26.980122 | orchestrator | Thursday 19 February 2026 03:36:26 +0000 (0:00:05.379) 0:00:08.029 ***** 2026-02-19 03:36:26.980137 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:36:26.980152 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:36:26.980169 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:36:26.980185 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:36:26.980202 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:36:26.980217 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:36:26.980233 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:26.980248 | orchestrator | 2026-02-19 03:36:26.980265 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:36:26.980282 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:36:26.980300 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-19 03:36:26.980315 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:36:26.980332 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:36:26.980348 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:36:26.980383 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:36:26.980421 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:36:26.980437 | orchestrator | 2026-02-19 03:36:26.980452 | orchestrator | 2026-02-19 03:36:26.980467 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:36:26.980504 | orchestrator | Thursday 19 February 2026 03:36:26 +0000 (0:00:00.512) 0:00:08.541 ***** 2026-02-19 03:36:26.980519 | orchestrator | =============================================================================== 2026-02-19 03:36:26.980533 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.38s 2026-02-19 03:36:26.980548 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s 2026-02-19 03:36:26.980563 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2026-02-19 03:36:26.980576 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-02-19 03:36:29.013069 | orchestrator | 2026-02-19 03:36:29 | INFO  | Task a3aabe98-d69b-4151-bdf1-7d80dc5060b9 (ceph) was prepared for execution. 2026-02-19 03:36:29.013204 | orchestrator | 2026-02-19 03:36:29 | INFO  | It takes a moment until task a3aabe98-d69b-4151-bdf1-7d80dc5060b9 (ceph) has been started and output is visible here. 
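The `lvm_report` printed at the end of the LVM play pairs each OSD block LV with its VG and each PV with the same VG. A small sketch, using the exact lv/pv entries shown in the log, that joins the two lists on `vg_name` to answer "which disk backs which OSD LV":

```python
# lv/pv entries copied from the lvm_report printed for testbed-node-5 above.
lvm_report = {
    "lv": [
        {"lv_name": "osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456",
         "vg_name": "ceph-3bb39c06-9317-5e70-9108-eeec2efc4456"},
        {"lv_name": "osd-block-98b2861f-503b-5d91-adc9-6468e68ac210",
         "vg_name": "ceph-98b2861f-503b-5d91-adc9-6468e68ac210"},
    ],
    "pv": [
        {"pv_name": "/dev/sdb", "vg_name": "ceph-98b2861f-503b-5d91-adc9-6468e68ac210"},
        {"pv_name": "/dev/sdc", "vg_name": "ceph-3bb39c06-9317-5e70-9108-eeec2efc4456"},
    ],
}

def device_to_lv(report):
    """Join PVs and LVs on vg_name: physical device -> OSD block LV."""
    vg_to_lv = {lv["vg_name"]: lv["lv_name"] for lv in report["lv"]}
    return {pv["pv_name"]: vg_to_lv[pv["vg_name"]] for pv in report["pv"]}

print(device_to_lv(lvm_report))
```

This mirrors the "Fail if ... LV defined in lvm_volumes is missing" checks: an LV listed in `lvm_volumes` whose VG has no matching entry in the report would raise a `KeyError` in the join, which is the condition those tasks guard against.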
2026-02-19 03:36:46.111756 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-19 03:36:46.111854 | orchestrator | 2.16.14 2026-02-19 03:36:46.111873 | orchestrator | 2026-02-19 03:36:46.111887 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-19 03:36:46.111900 | orchestrator | 2026-02-19 03:36:46.111909 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 03:36:46.111916 | orchestrator | Thursday 19 February 2026 03:36:33 +0000 (0:00:00.783) 0:00:00.783 ***** 2026-02-19 03:36:46.111924 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:36:46.111930 | orchestrator | 2026-02-19 03:36:46.111937 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-19 03:36:46.111943 | orchestrator | Thursday 19 February 2026 03:36:34 +0000 (0:00:01.061) 0:00:01.844 ***** 2026-02-19 03:36:46.111949 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:36:46.111956 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:36:46.111962 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:36:46.111968 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:36:46.111974 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:36:46.111982 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:36:46.111993 | orchestrator | 2026-02-19 03:36:46.112005 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-19 03:36:46.112020 | orchestrator | Thursday 19 February 2026 03:36:36 +0000 (0:00:01.192) 0:00:03.037 ***** 2026-02-19 03:36:46.112030 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:36:46.112040 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:36:46.112049 | orchestrator | ok: [testbed-node-5] 2026-02-19 
03:36:46.112059 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:36:46.112069 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:36:46.112079 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:36:46.112088 | orchestrator |
2026-02-19 03:36:46.112097 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-19 03:36:46.112107 | orchestrator | Thursday 19 February 2026 03:36:36 +0000 (0:00:00.655) 0:00:03.692 *****
2026-02-19 03:36:46.112117 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:36:46.112127 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:36:46.112137 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:36:46.112147 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:36:46.112184 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:36:46.112195 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:36:46.112203 | orchestrator |
2026-02-19 03:36:46.112210 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-19 03:36:46.112216 | orchestrator | Thursday 19 February 2026 03:36:37 +0000 (0:00:00.876) 0:00:04.569 *****
2026-02-19 03:36:46.112222 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:36:46.112229 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:36:46.112235 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:36:46.112241 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:36:46.112247 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:36:46.112253 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:36:46.112259 | orchestrator |
2026-02-19 03:36:46.112265 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-19 03:36:46.112272 | orchestrator | Thursday 19 February 2026 03:36:38 +0000 (0:00:00.678) 0:00:05.247 *****
2026-02-19 03:36:46.112279 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:36:46.112286 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:36:46.112293 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:36:46.112300 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:36:46.112307 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:36:46.112314 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:36:46.112320 | orchestrator |
2026-02-19 03:36:46.112327 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-19 03:36:46.112334 | orchestrator | Thursday 19 February 2026 03:36:38 +0000 (0:00:00.573) 0:00:05.821 *****
2026-02-19 03:36:46.112341 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:36:46.112348 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:36:46.112355 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:36:46.112362 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:36:46.112368 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:36:46.112375 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:36:46.112385 | orchestrator |
2026-02-19 03:36:46.112421 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-19 03:36:46.112429 | orchestrator | Thursday 19 February 2026 03:36:39 +0000 (0:00:00.985) 0:00:06.807 *****
2026-02-19 03:36:46.112437 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:46.112445 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:36:46.112452 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:36:46.112459 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:36:46.112467 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:36:46.112474 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:36:46.112481 | orchestrator |
2026-02-19 03:36:46.112488 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-19 03:36:46.112496 | orchestrator | Thursday 19 February 2026 03:36:40 +0000 (0:00:00.628) 0:00:07.435 *****
2026-02-19 03:36:46.112503 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:36:46.112510 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:36:46.112517 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:36:46.112524 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:36:46.112531 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:36:46.112551 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:36:46.112558 | orchestrator |
2026-02-19 03:36:46.112566 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-19 03:36:46.112573 | orchestrator | Thursday 19 February 2026 03:36:41 +0000 (0:00:00.779) 0:00:08.214 *****
2026-02-19 03:36:46.112580 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 03:36:46.112588 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 03:36:46.112595 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 03:36:46.112602 | orchestrator |
2026-02-19 03:36:46.112609 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-19 03:36:46.112616 | orchestrator | Thursday 19 February 2026 03:36:41 +0000 (0:00:00.665) 0:00:08.880 *****
2026-02-19 03:36:46.112631 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:36:46.112639 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:36:46.112646 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:36:46.112667 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:36:46.112674 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:36:46.112680 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:36:46.112686 | orchestrator |
2026-02-19 03:36:46.112693 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-19 03:36:46.112699 | orchestrator | Thursday 19 February 2026 03:36:42 +0000 (0:00:00.642) 0:00:09.522 *****
2026-02-19 03:36:46.112705 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 03:36:46.112711 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 03:36:46.112717 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 03:36:46.112724 | orchestrator |
2026-02-19 03:36:46.112730 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-19 03:36:46.112736 | orchestrator | Thursday 19 February 2026 03:36:44 +0000 (0:00:02.210) 0:00:11.733 *****
2026-02-19 03:36:46.112743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-19 03:36:46.112750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-19 03:36:46.112756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-19 03:36:46.112762 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:46.112768 | orchestrator |
2026-02-19 03:36:46.112775 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-19 03:36:46.112781 | orchestrator | Thursday 19 February 2026 03:36:45 +0000 (0:00:00.379) 0:00:12.113 *****
2026-02-19 03:36:46.112789 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-19 03:36:46.112798 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 03:36:46.112805 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 03:36:46.112811 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:46.112818 | orchestrator |
2026-02-19 03:36:46.112824 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-19 03:36:46.112830 | orchestrator | Thursday 19 February 2026 03:36:45 +0000 (0:00:00.582) 0:00:12.695 *****
2026-02-19 03:36:46.112838 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 03:36:46.112847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 03:36:46.112853 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 03:36:46.112866 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:46.112873 | orchestrator |
2026-02-19 03:36:46.112883 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-19 03:36:46.112889 | orchestrator | Thursday 19 February 2026 03:36:45 +0000 (0:00:00.154) 0:00:12.850 *****
2026-02-19 03:36:46.112904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 03:36:43.387843', 'end': '2026-02-19 03:36:43.428758', 'delta': '0:00:00.040915', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-19 03:36:54.789507 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 03:36:43.931510', 'end': '2026-02-19 03:36:43.982614', 'delta': '0:00:00.051104', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 03:36:54.789626 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 03:36:44.494393', 'end': '2026-02-19 03:36:44.533679', 'delta': '0:00:00.039286', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 03:36:54.789645 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.789663 | orchestrator |
2026-02-19 03:36:54.789680 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-19 03:36:54.789695 | orchestrator | Thursday 19 February 2026 03:36:46 +0000 (0:00:00.675) 0:00:13.024 *****
2026-02-19 03:36:54.789710 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:36:54.789725 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:36:54.789740 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:36:54.789754 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:36:54.789768 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:36:54.789783 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:36:54.789798 | orchestrator |
2026-02-19 03:36:54.789811 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-19 03:36:54.789825 | orchestrator | Thursday 19 February 2026 03:36:46 +0000 (0:00:00.845) 0:00:13.699 *****
2026-02-19 03:36:54.789839 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-19 03:36:54.789853 | orchestrator |
2026-02-19 03:36:54.789870 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-19 03:36:54.789885 | orchestrator | Thursday 19 February 2026 03:36:47 +0000 (0:00:00.845) 0:00:14.544 *****
2026-02-19 03:36:54.789925 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.789940 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:36:54.789954 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:36:54.789969 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:36:54.789985 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:36:54.790001 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:36:54.790069 | orchestrator |
2026-02-19 03:36:54.790085 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-19 03:36:54.790098 | orchestrator | Thursday 19 February 2026 03:36:48 +0000 (0:00:00.812) 0:00:15.357 *****
2026-02-19 03:36:54.790111 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.790124 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:36:54.790136 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:36:54.790149 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:36:54.790190 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:36:54.790203 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:36:54.790216 | orchestrator |
2026-02-19 03:36:54.790229 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 03:36:54.790242 | orchestrator | Thursday 19 February 2026 03:36:49 +0000 (0:00:00.992) 0:00:16.349 *****
2026-02-19 03:36:54.790254 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.790267 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:36:54.790280 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:36:54.790292 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:36:54.790305 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:36:54.790332 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:36:54.790344 | orchestrator |
2026-02-19 03:36:54.790356 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-19 03:36:54.790367 | orchestrator | Thursday 19 February 2026 03:36:49 +0000 (0:00:00.549) 0:00:16.899 *****
2026-02-19 03:36:54.790379 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.790477 | orchestrator |
2026-02-19 03:36:54.790491 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-19 03:36:54.790503 | orchestrator | Thursday 19 February 2026 03:36:50 +0000 (0:00:00.112) 0:00:17.012 *****
2026-02-19 03:36:54.790529 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.790552 | orchestrator |
2026-02-19 03:36:54.790564 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 03:36:54.790576 | orchestrator | Thursday 19 February 2026 03:36:50 +0000 (0:00:00.214) 0:00:17.226 *****
2026-02-19 03:36:54.790589 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.790601 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:36:54.790613 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:36:54.790625 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:36:54.790637 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:36:54.790650 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:36:54.790663 | orchestrator |
2026-02-19 03:36:54.790693 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-19 03:36:54.790706 | orchestrator | Thursday 19 February 2026 03:36:51 +0000 (0:00:00.719) 0:00:17.946 *****
2026-02-19 03:36:54.790719 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.790731 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:36:54.790743 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:36:54.790755 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:36:54.790767 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:36:54.790779 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:36:54.790791 | orchestrator |
2026-02-19 03:36:54.790803 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-19 03:36:54.790816 | orchestrator | Thursday 19 February 2026 03:36:51 +0000 (0:00:00.569) 0:00:18.516 *****
2026-02-19 03:36:54.790828 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.790840 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:36:54.790852 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:36:54.790873 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:36:54.790886 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:36:54.790897 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:36:54.790910 | orchestrator |
2026-02-19 03:36:54.790922 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-19 03:36:54.790934 | orchestrator | Thursday 19 February 2026 03:36:52 +0000 (0:00:00.693) 0:00:19.210 *****
2026-02-19 03:36:54.790946 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.790958 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:36:54.790970 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:36:54.790982 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:36:54.790995 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:36:54.791007 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:36:54.791018 | orchestrator |
2026-02-19 03:36:54.791031 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-19 03:36:54.791043 | orchestrator | Thursday 19 February 2026 03:36:52 +0000 (0:00:00.523) 0:00:19.734 *****
2026-02-19 03:36:54.791055 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.791068 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:36:54.791079 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:36:54.791092 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:36:54.791104 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:36:54.791116 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:36:54.791128 | orchestrator |
2026-02-19 03:36:54.791141 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-19 03:36:54.791153 | orchestrator | Thursday 19 February 2026 03:36:53 +0000 (0:00:00.653) 0:00:20.387 *****
2026-02-19 03:36:54.791165 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.791177 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:36:54.791190 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:36:54.791202 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:36:54.791214 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:36:54.791226 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:36:54.791238 | orchestrator |
2026-02-19 03:36:54.791250 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-19 03:36:54.791263 | orchestrator | Thursday 19 February 2026 03:36:53 +0000 (0:00:00.517) 0:00:20.905 *****
2026-02-19 03:36:54.791275 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:36:54.791288 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:36:54.791300 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:36:54.791312 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:36:54.791324 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:36:54.791336 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:36:54.791348 | orchestrator |
2026-02-19 03:36:54.791360 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-19 03:36:54.791373 | orchestrator | Thursday 19 February 2026 03:36:54 +0000 (0:00:00.690) 0:00:21.595 *****
2026-02-19 03:36:54.791407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159', 'dm-uuid-LVM-woOiLPc2MZX9tMqNu9mJ52M00GUnNLJGpmysyPKim6lEMTRsO9IDguylIzFZfnRl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:54.791429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542', 'dm-uuid-LVM-lX34uhB8tmDTkL93DczNXv6QbAw0ysjKmdjNAgdMohU9ZcAXcHNfClcWYQxdmajV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:54.791457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:54.896656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:54.896734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:54.896743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:54.896749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:54.896755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:54.896761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:54.896767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:54.896806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-19 03:36:54.896832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-he7JRo-1c5L-pX5O-Be3A-VFvn-vFA2-R1K8r6', 'scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e', 'scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-19 03:36:54.896841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qeKANd-btTr-kyqx-ZYbg-qz1F-HqnA-ll4bBH', 'scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8', 'scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-19 03:36:54.896850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417', 'scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-19 03:36:54.896870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-19 03:36:54.896889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02', 'dm-uuid-LVM-av3z15qCzrck2TCuh26quy9SxGc4Uj0HHGk96w6thbK5NXQZcgefX0YYJ6eJW1Ww'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:55.008319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160', 'dm-uuid-LVM-rZldl4LmlLXg6d7bs7fyJX4wA6bTnXoE36sCfZeCCq67ndja1fQrkP9qxd3UF2mf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:55.008455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:55.008478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:55.008494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:55.008509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:55.008525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:55.008596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:55.008614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:55.008629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-19 03:36:55.008672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.008692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hPAd08-UuBL-3Ygg-jY8a-jEiG-hu1p-INZmAJ', 'scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262', 'scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.008726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6C6XL0-fLb8-YfTA-cysM-yAaf-4LBE-w1N2gW', 'scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0', 'scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.008785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58', 'scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.194899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.194975 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:36:55.194984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210', 'dm-uuid-LVM-UIbdS0VVHImCuypuIpNFpiSdvep5TRFy7pgtKei4H9zcQ1O9SOgtegap7Wmtw1fM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.194991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456', 'dm-uuid-LVM-gHzkzoT6x1EhckfA8WsFQCGWNshTerqrXG1Ajk5mh4ejOwZYq1z2HQZKbcxUaUg2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.194996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.195017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.195032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.195036 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:36:55.195040 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.195043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.195058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.195062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.195066 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.195075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.195086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6O260y-bve9-uiSU-QHAy-uS14-SBn4-tvFUE4', 'scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42', 'scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.195097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-82yKcB-Ey0W-COBu-ydNY-Ko6v-AgZ3-OegvdJ', 'scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46', 'scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.368797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.368889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b', 'scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.368935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.368947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.368969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-19 03:36:55.368978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.368986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.368995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.369018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.369027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.369042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.369058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.369068 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:55.369077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.369086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-19 03:36:55.369099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.496752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.496889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.496907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.496919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.496931 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:36:55.496961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.497000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.497027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.497039 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:36:55.497047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.497055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.497068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.497076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.497083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.497090 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.497097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.497111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:36:55.808109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:36:55.808185 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-19 03:36:55.808194 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:36:55.808201 | orchestrator |
2026-02-19 03:36:55.808210 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-19 03:36:55.808219 | orchestrator | Thursday 19 February 2026 03:36:55 +0000 (0:00:00.897) 0:00:22.492 *****
2026-02-19 03:36:55.808227 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159', 'dm-uuid-LVM-woOiLPc2MZX9tMqNu9mJ52M00GUnNLJGpmysyPKim6lEMTRsO9IDguylIzFZfnRl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-19 03:36:55.808268 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool',
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542', 'dm-uuid-LVM-lX34uhB8tmDTkL93DczNXv6QbAw0ysjKmdjNAgdMohU9ZcAXcHNfClcWYQxdmajV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.808278 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.808287 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.808299 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.808307 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.808314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.808327 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.808341 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02', 'dm-uuid-LVM-av3z15qCzrck2TCuh26quy9SxGc4Uj0HHGk96w6thbK5NXQZcgefX0YYJ6eJW1Ww'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.854135 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.854224 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160', 'dm-uuid-LVM-rZldl4LmlLXg6d7bs7fyJX4wA6bTnXoE36sCfZeCCq67ndja1fQrkP9qxd3UF2mf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.854239 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.854245 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.854262 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.854283 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-19 03:36:55.854290 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.854296 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-he7JRo-1c5L-pX5O-Be3A-VFvn-vFA2-R1K8r6', 'scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e', 'scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.854304 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:55.854312 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qeKANd-btTr-kyqx-ZYbg-qz1F-HqnA-ll4bBH', 'scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8', 'scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.098756 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417', 'scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.098829 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.098837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.098918 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.098925 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.098929 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.098954 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-19 03:36:56.098964 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hPAd08-UuBL-3Ygg-jY8a-jEiG-hu1p-INZmAJ', 'scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262', 'scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.098974 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6C6XL0-fLb8-YfTA-cysM-yAaf-4LBE-w1N2gW', 'scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0', 'scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302237 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58', 'scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302505 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302559 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210', 'dm-uuid-LVM-UIbdS0VVHImCuypuIpNFpiSdvep5TRFy7pgtKei4H9zcQ1O9SOgtegap7Wmtw1fM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302579 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456', 'dm-uuid-LVM-gHzkzoT6x1EhckfA8WsFQCGWNshTerqrXG1Ajk5mh4ejOwZYq1z2HQZKbcxUaUg2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302599 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302645 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302677 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302696 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302728 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302766 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:36:56.302789 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302807 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.302860 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-19 03:36:56.457498 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6O260y-bve9-uiSU-QHAy-uS14-SBn4-tvFUE4', 'scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42', 'scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.457588 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.457601 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-82yKcB-Ey0W-COBu-ydNY-Ko6v-AgZ3-OegvdJ', 'scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46', 'scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.457610 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.457640 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-19 03:36:56.457665 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b', 'scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.457675 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.457720 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-22-00']}, 
'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.457736 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.457744 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.457758 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.457775 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.599466 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-19 03:36:56.599552 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.599631 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:36:56.599645 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.599671 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.599680 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.599688 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.599696 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.599710 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.599725 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.599733 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.599750 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.816763 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.816886 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:36:56.816902 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:36:56.816914 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:36:56.816927 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.816940 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.816952 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.816963 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.816974 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.817020 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.817033 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-19 03:36:56.817045 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:36:56.817059 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-19 03:36:56.817094 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-19 03:37:08.911459 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:08.911575 | orchestrator |
2026-02-19 03:37:08.911592 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-19 03:37:08.911605 | orchestrator | Thursday 19 February 2026 03:36:56 +0000 (0:00:01.235) 0:00:23.728 *****
2026-02-19 03:37:08.911617 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:37:08.911629 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:37:08.911639 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:37:08.911650 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:37:08.911661 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:37:08.911671 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:37:08.911682 | orchestrator |
2026-02-19 03:37:08.911693 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-19 03:37:08.911704 | orchestrator | Thursday 19 February 2026 03:36:57 +0000 (0:00:00.904) 0:00:24.633 *****
2026-02-19 03:37:08.911715 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:37:08.911725 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:37:08.911736 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:37:08.911746 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:37:08.911757 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:37:08.911768 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:37:08.911778 | orchestrator |
2026-02-19 03:37:08.911789 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-19 03:37:08.911800 | orchestrator | Thursday 19 February 2026 03:36:58 +0000 (0:00:00.661) 0:00:25.295 *****
2026-02-19 03:37:08.911811 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:08.911821 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:08.911832 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:08.911843 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:08.911854 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:08.911864 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:08.911875 | orchestrator |
2026-02-19 03:37:08.911886 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-19 03:37:08.911897 | orchestrator | Thursday 19 February 2026 03:36:58 +0000 (0:00:00.516) 0:00:25.811 *****
2026-02-19 03:37:08.911908 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:08.911919 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:08.911930 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:08.911941 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:08.911952 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:08.911965 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:08.911978 | orchestrator |
2026-02-19 03:37:08.911990 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-19 03:37:08.912003 | orchestrator | Thursday 19 February 2026 03:36:59 +0000 (0:00:00.682) 0:00:26.494 *****
2026-02-19 03:37:08.912016 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:08.912029 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:08.912041 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:08.912078 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:08.912091 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:08.912103 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:08.912116 | orchestrator |
2026-02-19 03:37:08.912129 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-19 03:37:08.912141 | orchestrator | Thursday 19 February 2026 03:37:00 +0000 (0:00:00.555) 0:00:27.049 *****
2026-02-19 03:37:08.912154 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:08.912167 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:08.912180 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:08.912192 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:08.912202 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:08.912213 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:08.912224 | orchestrator |
2026-02-19 03:37:08.912234 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-19 03:37:08.912245 | orchestrator | Thursday 19 February 2026 03:37:00 +0000 (0:00:00.716) 0:00:27.765 *****
2026-02-19 03:37:08.912256 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-19 03:37:08.912267 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-19 03:37:08.912278 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-19 03:37:08.912289 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-19 03:37:08.912299 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-19 03:37:08.912310 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-19 03:37:08.912321 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-19 03:37:08.912331 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 03:37:08.912342 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-19 03:37:08.912352 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-19 03:37:08.912363 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-19 03:37:08.912373 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-19 03:37:08.912413 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-19 03:37:08.912425 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-19 03:37:08.912436 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-19 03:37:08.912446 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-19 03:37:08.912457 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-19 03:37:08.912483 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-19 03:37:08.912494 | orchestrator |
2026-02-19 03:37:08.912505 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-19 03:37:08.912516 | orchestrator | Thursday 19 February 2026 03:37:02 +0000 (0:00:02.060) 0:00:29.826 *****
2026-02-19 03:37:08.912526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-19 03:37:08.912537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-19 03:37:08.912548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-19 03:37:08.912559 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:08.912569 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-19 03:37:08.912580 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-19 03:37:08.912591 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-19 03:37:08.912618 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:08.912629 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-19 03:37:08.912640 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-19 03:37:08.912651 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-19 03:37:08.912661 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:08.912672 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 03:37:08.912682 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-19 03:37:08.912701 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-19 03:37:08.912712 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:08.912722 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-19 03:37:08.912733 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-19 03:37:08.912743 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-19 03:37:08.912754 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:08.912764 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-19 03:37:08.912781 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-19 03:37:08.912800 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-19 03:37:08.912817 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:08.912837 | orchestrator |
2026-02-19 03:37:08.912856 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-19 03:37:08.912875 | orchestrator | Thursday 19 February 2026 03:37:04 +0000 (0:00:01.182) 0:00:31.008 *****
2026-02-19 03:37:08.912894 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:08.912907 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:08.912917 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:08.912929 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:37:08.912940 | orchestrator |
2026-02-19 03:37:08.912951 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 03:37:08.912964 | orchestrator | Thursday 19 February 2026 03:37:05 +0000 (0:00:01.079) 0:00:32.088 *****
2026-02-19 03:37:08.912975 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:08.912986 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:08.912997 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:08.913008 | orchestrator |
2026-02-19 03:37:08.913019 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 03:37:08.913029 | orchestrator | Thursday 19 February 2026 03:37:05 +0000 (0:00:00.416) 0:00:32.504 *****
2026-02-19 03:37:08.913040 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:08.913051 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:08.913062 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:08.913072 | orchestrator |
2026-02-19 03:37:08.913083 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 03:37:08.913094 | orchestrator | Thursday 19 February 2026 03:37:05 +0000 (0:00:00.394) 0:00:32.899 *****
2026-02-19 03:37:08.913105 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:08.913116 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:08.913126 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:08.913137 | orchestrator |
2026-02-19 03:37:08.913148 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 03:37:08.913159 | orchestrator | Thursday 19 February 2026 03:37:06 +0000 (0:00:00.376) 0:00:33.276 *****
2026-02-19 03:37:08.913170 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:37:08.913181 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:37:08.913197 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:37:08.913222 | orchestrator |
2026-02-19 03:37:08.913248 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 03:37:08.913267 | orchestrator | Thursday 19 February 2026 03:37:07 +0000 (0:00:00.789) 0:00:34.065 *****
2026-02-19 03:37:08.913285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:37:08.913303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:37:08.913321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:37:08.913340 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:08.913359 | orchestrator |
2026-02-19 03:37:08.913377 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 03:37:08.913468 | orchestrator | Thursday 19 February 2026 03:37:07 +0000 (0:00:00.483) 0:00:34.548 *****
2026-02-19 03:37:08.913492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:37:08.913504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:37:08.913514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:37:08.913525 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:08.913536 | orchestrator |
2026-02-19 03:37:08.913546 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 03:37:08.913557 | orchestrator | Thursday 19 February 2026 03:37:08 +0000 (0:00:00.417) 0:00:34.966 *****
2026-02-19 03:37:08.913576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:37:08.913588 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:37:08.913599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:37:08.913609 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:08.913620 | orchestrator |
2026-02-19 03:37:08.913631 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 03:37:08.913642 | orchestrator | Thursday 19 February 2026 03:37:08 +0000 (0:00:00.472) 0:00:35.439 *****
2026-02-19 03:37:08.913652 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:37:08.913663 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:37:08.913673 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:37:08.913684 | orchestrator |
2026-02-19 03:37:08.913695 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 03:37:08.913718 | orchestrator | Thursday 19 February 2026 03:37:08 +0000 (0:00:00.379) 0:00:35.819 *****
2026-02-19 03:37:29.588052 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-19 03:37:29.588160 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-19 03:37:29.588176 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-19 03:37:29.588188 | orchestrator |
2026-02-19 03:37:29.588201 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-19 03:37:29.588213 | orchestrator | Thursday 19 February 2026 03:37:09 +0000 (0:00:01.037) 0:00:36.857 *****
2026-02-19 03:37:29.588224 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 03:37:29.588235 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 03:37:29.588246 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 03:37:29.588259 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:37:29.588278 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 03:37:29.588298 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-19 03:37:29.588317 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 03:37:29.588337 | orchestrator |
2026-02-19 03:37:29.588358 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-19 03:37:29.588407 | orchestrator | Thursday 19 February 2026 03:37:10 +0000 (0:00:00.911) 0:00:37.768 *****
2026-02-19 03:37:29.588428 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 03:37:29.588446 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 03:37:29.588464 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 03:37:29.588481 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:37:29.588499 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 03:37:29.588517 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-19 03:37:29.588534 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 03:37:29.588552 | orchestrator |
2026-02-19 03:37:29.588571 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 03:37:29.588622 | orchestrator | Thursday 19 February 2026 03:37:12 +0000 (0:00:01.966) 0:00:39.735 *****
2026-02-19 03:37:29.588645 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:37:29.588667 | orchestrator |
2026-02-19 03:37:29.588684 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-19 03:37:29.588701 | orchestrator | Thursday 19 February 2026 03:37:14 +0000 (0:00:01.202) 0:00:40.938 *****
2026-02-19 03:37:29.588718 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:37:29.588736 | orchestrator |
2026-02-19 03:37:29.588752 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-19 03:37:29.588769 | orchestrator | Thursday 19 February 2026 03:37:15 +0000 (0:00:01.232) 0:00:42.170 *****
2026-02-19 03:37:29.588786 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:29.588803 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:29.588820 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:29.588836 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:37:29.588853 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:37:29.588870 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:37:29.588886 | orchestrator |
2026-02-19 03:37:29.588903 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-19 03:37:29.588919 | orchestrator | Thursday 19 February 2026 03:37:16 +0000 (0:00:01.255) 0:00:43.425 *****
2026-02-19 03:37:29.588936 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:29.588954 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:37:29.588970 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:29.588987 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:37:29.589004 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:29.589020 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:37:29.589037 | orchestrator |
2026-02-19 03:37:29.589053 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-19 03:37:29.589070 | orchestrator | Thursday 19 February 2026 03:37:17 +0000 (0:00:00.758) 0:00:44.184 *****
2026-02-19 03:37:29.589088 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:37:29.589104 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:37:29.589120 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:29.589137 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:37:29.589153 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:29.589170 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:29.589186 | orchestrator |
2026-02-19 03:37:29.589222 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-19 03:37:29.589240 | orchestrator | Thursday 19 February 2026 03:37:18 +0000 (0:00:00.847) 0:00:45.032 *****
2026-02-19 03:37:29.589258 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:29.589275 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:29.589293 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:37:29.589310 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:29.589327 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:37:29.589344 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:37:29.589360 | orchestrator |
2026-02-19 03:37:29.589466 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-19 03:37:29.589491 | orchestrator | Thursday 19 February 2026 03:37:18 +0000 (0:00:00.739) 0:00:45.771 *****
2026-02-19 03:37:29.589507 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:29.589526 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:29.589568 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:29.589586 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:37:29.589603 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:37:29.589620 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:37:29.589637 | orchestrator |
2026-02-19 03:37:29.589654 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-19 03:37:29.589685 | orchestrator | Thursday 19 February 2026 03:37:20 +0000 (0:00:01.252) 0:00:47.024 *****
2026-02-19 03:37:29.589704 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:29.589721 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:29.589740 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:29.589756 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:29.589775 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:29.589793 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:29.589812 | orchestrator |
2026-02-19 03:37:29.589832 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-19 03:37:29.589867 | orchestrator | Thursday 19 February 2026 03:37:20 +0000 (0:00:00.680) 0:00:47.704 *****
2026-02-19 03:37:29.589885 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:29.589901 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:29.589917 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:29.589935 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:29.589953 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:29.589970 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:29.589986 | orchestrator |
2026-02-19 03:37:29.590003 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-19 03:37:29.590149 | orchestrator | Thursday 19 February 2026 03:37:21 +0000 (0:00:00.897) 0:00:48.602 *****
2026-02-19 03:37:29.590168 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:37:29.590184 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:37:29.590201 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:37:29.590217 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:37:29.590234 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:37:29.590251 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:37:29.590269 | orchestrator |
2026-02-19 03:37:29.590286 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-19 03:37:29.590304 | orchestrator | Thursday 19 February 2026 03:37:22 +0000 (0:00:01.153) 0:00:49.755 *****
2026-02-19 03:37:29.590322 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:37:29.590338 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:37:29.590355 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:37:29.590371 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:37:29.590463 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:37:29.590483 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:37:29.590499 | orchestrator |
2026-02-19 03:37:29.590515 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-19 03:37:29.590531 | orchestrator | Thursday 19 February 2026 03:37:24 +0000 (0:00:01.467) 0:00:51.222 *****
2026-02-19 03:37:29.590548 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:29.590564 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:29.590581 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:29.590598 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:29.590615 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:29.590632 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:29.590648 | orchestrator |
2026-02-19 03:37:29.590666 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 03:37:29.590683 | orchestrator | Thursday 19 February 2026 03:37:24 +0000 (0:00:00.694) 0:00:51.917 *****
2026-02-19 03:37:29.590699 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:29.590717 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:29.590733 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:29.590749 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:37:29.590766 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:37:29.590782 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:37:29.590799 | orchestrator |
2026-02-19 03:37:29.590816 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 03:37:29.590832 | orchestrator | Thursday 19 February 2026 03:37:26 +0000 (0:00:01.035) 0:00:52.953 *****
2026-02-19 03:37:29.590849 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:37:29.590866 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:37:29.590900 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:37:29.590916 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:29.590933 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:29.590950 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:29.590983 | orchestrator |
2026-02-19 03:37:29.591012 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 03:37:29.591026 | orchestrator | Thursday 19 February 2026 03:37:26 +0000 (0:00:00.715) 0:00:53.668 *****
2026-02-19 03:37:29.591041 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:37:29.591055 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:37:29.591070 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:37:29.591085 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:29.591099 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:29.591114 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:29.591129 | orchestrator |
2026-02-19 03:37:29.591144 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 03:37:29.591159 | orchestrator | Thursday 19 February 2026 03:37:27 +0000 (0:00:01.001) 0:00:54.669 *****
2026-02-19 03:37:29.591173 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:37:29.591188 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:37:29.591203 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:37:29.591218 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:29.591232 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:29.591258 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:29.591274 | orchestrator |
2026-02-19 03:37:29.591290 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 03:37:29.591306 | orchestrator | Thursday 19 February 2026 03:37:28 +0000 (0:00:00.646) 0:00:55.316 *****
2026-02-19 03:37:29.591321 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:29.591337 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:37:29.591353 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:37:29.591368 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:37:29.591408 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:37:29.591425 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:37:29.591442 | orchestrator |
2026-02-19 03:37:29.591458 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 03:37:29.591475 | orchestrator | Thursday 19 February 2026 03:37:29 +0000 (0:00:00.894) 0:00:56.211 *****
2026-02-19 03:37:29.591492 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:37:29.591519 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:38:30.665999 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:38:30.666161 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:38:30.666174 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:38:30.666182 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:38:30.666190 | orchestrator |
2026-02-19 03:38:30.666199 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 03:38:30.666208 | orchestrator | Thursday 19 February 2026 03:37:29 +0000 (0:00:00.620) 0:00:56.831 *****
2026-02-19 03:38:30.666215 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:38:30.666264 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:38:30.666274 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:38:30.666281 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:38:30.666290 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:38:30.666297 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:38:30.666304 | orchestrator |
2026-02-19 03:38:30.666312 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 03:38:30.666319 | orchestrator | Thursday 19 February 2026 03:37:30 +0000 (0:00:00.837) 0:00:57.668 *****
2026-02-19 03:38:30.666327 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:38:30.666334 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:38:30.666341 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:38:30.666348 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:38:30.666355 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:38:30.666363 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:38:30.666450 | orchestrator |
2026-02-19 03:38:30.666458 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 03:38:30.666466 | orchestrator | Thursday 19 February 2026 03:37:31 +0000 (0:00:00.673) 0:00:58.342 *****
2026-02-19 03:38:30.666473 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:38:30.666480 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:38:30.666488 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:38:30.666495 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:38:30.666502 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:38:30.666509 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:38:30.666518 | orchestrator |
2026-02-19 03:38:30.666527 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-19 03:38:30.666536 | orchestrator | Thursday 19 February 2026 03:37:32 +0000 (0:00:01.335) 0:00:59.677 *****
2026-02-19 03:38:30.666544 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:38:30.666552 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:38:30.666561 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:38:30.666573 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:38:30.666592 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:38:30.666606 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:38:30.666618 | orchestrator |
2026-02-19 03:38:30.666629 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-19 03:38:30.666642 | orchestrator | Thursday 19 February 2026 03:37:34 +0000 (0:00:01.756) 0:01:01.434 *****
2026-02-19 03:38:30.666654 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:38:30.666666 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:38:30.666677 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:38:30.666689 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:38:30.666701 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:38:30.666713 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:38:30.666726 | orchestrator |
2026-02-19 03:38:30.666738 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-19 03:38:30.666750 | orchestrator | Thursday 19 February 2026 03:37:36 +0000 (0:00:02.295) 0:01:03.729 *****
2026-02-19 03:38:30.666766 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:38:30.666780 | orchestrator |
2026-02-19 03:38:30.666793 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-19 03:38:30.666806 | orchestrator | Thursday 19 February 2026 03:37:38 +0000 (0:00:01.392) 0:01:05.122 *****
2026-02-19 03:38:30.666820 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:38:30.666832 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:38:30.666844 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:38:30.666853 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:38:30.666861 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:38:30.666871 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:38:30.666887 | orchestrator |
2026-02-19 03:38:30.666905 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-19 03:38:30.666917 | orchestrator | Thursday 19 February 2026 03:37:38 +0000 (0:00:00.628) 0:01:05.750 *****
2026-02-19 03:38:30.666929 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:38:30.666941 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:38:30.666952 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:38:30.666962 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:38:30.666973 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:38:30.666986 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:38:30.666998 | orchestrator |
2026-02-19 03:38:30.667010 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-19 03:38:30.667022 | orchestrator | Thursday 19 February 2026 03:37:39 +0000 (0:00:00.847) 0:01:06.597 *****
2026-02-19 03:38:30.667034 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 03:38:30.667063 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 03:38:30.667085 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 03:38:30.667093 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 03:38:30.667101 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 03:38:30.667108 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 03:38:30.667116 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 03:38:30.667124 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 03:38:30.667131 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 03:38:30.667156 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 03:38:30.667165 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 03:38:30.667172 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 03:38:30.667179 | orchestrator |
2026-02-19 03:38:30.667186 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-19 03:38:30.667193 | orchestrator | Thursday 19 February 2026 03:37:41 +0000 (0:00:01.346) 0:01:07.944 *****
2026-02-19 03:38:30.667200 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:38:30.667208 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:38:30.667215 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:38:30.667222 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:38:30.667229 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:38:30.667236 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:38:30.667243 | orchestrator |
2026-02-19 03:38:30.667250 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-19 03:38:30.667257 | orchestrator | Thursday 19 February 2026 03:37:42 +0000 (0:00:01.284) 0:01:09.229 *****
2026-02-19 03:38:30.667264 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:38:30.667271 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:38:30.667278 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:38:30.667285 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:38:30.667292 |
orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:30.667299 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:30.667306 | orchestrator | 2026-02-19 03:38:30.667313 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-19 03:38:30.667320 | orchestrator | Thursday 19 February 2026 03:37:42 +0000 (0:00:00.664) 0:01:09.894 ***** 2026-02-19 03:38:30.667327 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:30.667334 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:30.667342 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:30.667349 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:30.667355 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:30.667362 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:30.667394 | orchestrator | 2026-02-19 03:38:30.667402 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-19 03:38:30.667409 | orchestrator | Thursday 19 February 2026 03:37:43 +0000 (0:00:00.815) 0:01:10.709 ***** 2026-02-19 03:38:30.667416 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:30.667423 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:30.667431 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:30.667438 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:30.667445 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:30.667452 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:30.667459 | orchestrator | 2026-02-19 03:38:30.667466 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-19 03:38:30.667473 | orchestrator | Thursday 19 February 2026 03:37:44 +0000 (0:00:00.623) 0:01:11.333 ***** 2026-02-19 03:38:30.667487 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:38:30.667495 | orchestrator | 2026-02-19 03:38:30.667502 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-19 03:38:30.667509 | orchestrator | Thursday 19 February 2026 03:37:45 +0000 (0:00:01.251) 0:01:12.584 ***** 2026-02-19 03:38:30.667516 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:38:30.667524 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:38:30.667531 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:38:30.667538 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:38:30.667545 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:38:30.667552 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:38:30.667559 | orchestrator | 2026-02-19 03:38:30.667566 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-19 03:38:30.667574 | orchestrator | Thursday 19 February 2026 03:38:29 +0000 (0:00:44.295) 0:01:56.880 ***** 2026-02-19 03:38:30.667581 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 03:38:30.667588 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 03:38:30.667595 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 03:38:30.667602 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:30.667610 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 03:38:30.667617 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 03:38:30.667624 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 03:38:30.667631 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:30.667638 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 
03:38:30.667645 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 03:38:30.667656 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 03:38:30.667664 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:30.667671 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 03:38:30.667678 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 03:38:30.667685 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 03:38:30.667692 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:30.667699 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 03:38:30.667706 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 03:38:30.667713 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 03:38:30.667726 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.112651 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 03:38:55.112754 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 03:38:55.112769 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 03:38:55.112781 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.112792 | orchestrator | 2026-02-19 03:38:55.112805 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-19 03:38:55.112816 | orchestrator | Thursday 19 February 2026 03:38:30 +0000 (0:00:00.700) 0:01:57.580 ***** 2026-02-19 03:38:55.112826 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.112837 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:55.112848 | 
orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:55.112858 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:55.112869 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.112906 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.112917 | orchestrator | 2026-02-19 03:38:55.112928 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-19 03:38:55.112939 | orchestrator | Thursday 19 February 2026 03:38:31 +0000 (0:00:00.830) 0:01:58.411 ***** 2026-02-19 03:38:55.112950 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.112956 | orchestrator | 2026-02-19 03:38:55.112963 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-19 03:38:55.112969 | orchestrator | Thursday 19 February 2026 03:38:31 +0000 (0:00:00.136) 0:01:58.547 ***** 2026-02-19 03:38:55.112975 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.112981 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:55.112988 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:55.112994 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:55.113000 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.113006 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.113012 | orchestrator | 2026-02-19 03:38:55.113018 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-19 03:38:55.113024 | orchestrator | Thursday 19 February 2026 03:38:32 +0000 (0:00:00.652) 0:01:59.199 ***** 2026-02-19 03:38:55.113030 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.113036 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:55.113042 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:55.113048 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:55.113055 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.113061 | 
orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.113067 | orchestrator | 2026-02-19 03:38:55.113073 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-19 03:38:55.113079 | orchestrator | Thursday 19 February 2026 03:38:33 +0000 (0:00:00.952) 0:02:00.152 ***** 2026-02-19 03:38:55.113085 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.113091 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:55.113097 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:55.113104 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:55.113110 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.113116 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.113122 | orchestrator | 2026-02-19 03:38:55.113128 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-19 03:38:55.113134 | orchestrator | Thursday 19 February 2026 03:38:33 +0000 (0:00:00.606) 0:02:00.759 ***** 2026-02-19 03:38:55.113141 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:38:55.113148 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:38:55.113154 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:38:55.113160 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:38:55.113166 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:38:55.113172 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:38:55.113177 | orchestrator | 2026-02-19 03:38:55.113184 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-19 03:38:55.113190 | orchestrator | Thursday 19 February 2026 03:38:37 +0000 (0:00:03.853) 0:02:04.612 ***** 2026-02-19 03:38:55.113197 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:38:55.113204 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:38:55.113211 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:38:55.113218 | orchestrator | ok: [testbed-node-0] 2026-02-19 
03:38:55.113224 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:38:55.113231 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:38:55.113238 | orchestrator | 2026-02-19 03:38:55.113246 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-19 03:38:55.113253 | orchestrator | Thursday 19 February 2026 03:38:38 +0000 (0:00:00.587) 0:02:05.199 ***** 2026-02-19 03:38:55.113261 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:38:55.113269 | orchestrator | 2026-02-19 03:38:55.113276 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-19 03:38:55.113289 | orchestrator | Thursday 19 February 2026 03:38:39 +0000 (0:00:01.252) 0:02:06.451 ***** 2026-02-19 03:38:55.113296 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.113304 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:55.113311 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:55.113317 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:55.113337 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.113344 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.113351 | orchestrator | 2026-02-19 03:38:55.113358 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-19 03:38:55.113416 | orchestrator | Thursday 19 February 2026 03:38:40 +0000 (0:00:00.849) 0:02:07.300 ***** 2026-02-19 03:38:55.113424 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.113431 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:55.113438 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:55.113445 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:55.113452 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.113459 | 
orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.113466 | orchestrator | 2026-02-19 03:38:55.113473 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-19 03:38:55.113480 | orchestrator | Thursday 19 February 2026 03:38:41 +0000 (0:00:00.670) 0:02:07.971 ***** 2026-02-19 03:38:55.113487 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.113508 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:55.113516 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:55.113522 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:55.113529 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.113536 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.113543 | orchestrator | 2026-02-19 03:38:55.113550 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-19 03:38:55.113557 | orchestrator | Thursday 19 February 2026 03:38:42 +0000 (0:00:00.980) 0:02:08.951 ***** 2026-02-19 03:38:55.113564 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.113570 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:55.113576 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:55.113582 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:55.113588 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.113594 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.113600 | orchestrator | 2026-02-19 03:38:55.113606 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-19 03:38:55.113613 | orchestrator | Thursday 19 February 2026 03:38:42 +0000 (0:00:00.610) 0:02:09.562 ***** 2026-02-19 03:38:55.113619 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.113625 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:55.113631 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:55.113637 | 
orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:55.113643 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.113649 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.113655 | orchestrator | 2026-02-19 03:38:55.113661 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-19 03:38:55.113667 | orchestrator | Thursday 19 February 2026 03:38:43 +0000 (0:00:00.905) 0:02:10.467 ***** 2026-02-19 03:38:55.113673 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.113679 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:55.113685 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:55.113691 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:55.113697 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.113704 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.113710 | orchestrator | 2026-02-19 03:38:55.113716 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-19 03:38:55.113723 | orchestrator | Thursday 19 February 2026 03:38:44 +0000 (0:00:00.627) 0:02:11.095 ***** 2026-02-19 03:38:55.113741 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.113755 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:55.113769 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:55.113781 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:55.113792 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.113802 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.113813 | orchestrator | 2026-02-19 03:38:55.113824 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-19 03:38:55.113835 | orchestrator | Thursday 19 February 2026 03:38:45 +0000 (0:00:00.844) 0:02:11.940 ***** 2026-02-19 03:38:55.113845 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:38:55.113854 | 
orchestrator | skipping: [testbed-node-4] 2026-02-19 03:38:55.113865 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:38:55.113875 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:38:55.113884 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:38:55.113893 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:38:55.113904 | orchestrator | 2026-02-19 03:38:55.113914 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-19 03:38:55.113924 | orchestrator | Thursday 19 February 2026 03:38:45 +0000 (0:00:00.617) 0:02:12.557 ***** 2026-02-19 03:38:55.113935 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:38:55.113948 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:38:55.113959 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:38:55.113969 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:38:55.113980 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:38:55.113990 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:38:55.114001 | orchestrator | 2026-02-19 03:38:55.114067 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-19 03:38:55.114083 | orchestrator | Thursday 19 February 2026 03:38:46 +0000 (0:00:01.266) 0:02:13.824 ***** 2026-02-19 03:38:55.114095 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:38:55.114109 | orchestrator | 2026-02-19 03:38:55.114120 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-19 03:38:55.114134 | orchestrator | Thursday 19 February 2026 03:38:48 +0000 (0:00:01.231) 0:02:15.055 ***** 2026-02-19 03:38:55.114141 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-19 03:38:55.114147 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-19 
03:38:55.114154 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-19 03:38:55.114160 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-19 03:38:55.114166 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-19 03:38:55.114172 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-19 03:38:55.114179 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-19 03:38:55.114192 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-19 03:38:55.114198 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-19 03:38:55.114205 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-19 03:38:55.114211 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-19 03:38:55.114217 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-19 03:38:55.114223 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-19 03:38:55.114229 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-19 03:38:55.114235 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-19 03:38:55.114241 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-19 03:38:55.114248 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-19 03:38:55.114262 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-19 03:39:00.790946 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-19 03:39:00.791052 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-19 03:39:00.791061 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-19 03:39:00.791068 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-19 03:39:00.791074 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 
2026-02-19 03:39:00.791080 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-19 03:39:00.791086 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-19 03:39:00.791092 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-19 03:39:00.791097 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-19 03:39:00.791103 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-19 03:39:00.791109 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-19 03:39:00.791114 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-19 03:39:00.791132 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-19 03:39:00.791140 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-19 03:39:00.791153 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-19 03:39:00.791159 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-19 03:39:00.791164 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-19 03:39:00.791170 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-19 03:39:00.791176 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-19 03:39:00.791181 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-19 03:39:00.791187 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-19 03:39:00.791193 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-19 03:39:00.791198 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-19 03:39:00.791204 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-19 03:39:00.791210 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-19 
03:39:00.791215 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-19 03:39:00.791221 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-19 03:39:00.791227 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-19 03:39:00.791233 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-19 03:39:00.791239 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-19 03:39:00.791244 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-19 03:39:00.791250 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-19 03:39:00.791256 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-19 03:39:00.791261 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-19 03:39:00.791267 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-19 03:39:00.791273 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-19 03:39:00.791278 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-19 03:39:00.791285 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-19 03:39:00.791290 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-19 03:39:00.791296 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-19 03:39:00.791302 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-19 03:39:00.791307 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-19 03:39:00.791313 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-19 03:39:00.791333 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/ceph/bootstrap-mgr) 2026-02-19 03:39:00.791346 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-19 03:39:00.791352 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-19 03:39:00.791357 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-19 03:39:00.791404 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-19 03:39:00.791410 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-19 03:39:00.791428 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-19 03:39:00.791434 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-19 03:39:00.791440 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-19 03:39:00.791446 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-19 03:39:00.791452 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-19 03:39:00.791457 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-19 03:39:00.791463 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-19 03:39:00.791469 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-19 03:39:00.791486 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-19 03:39:00.791493 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-19 03:39:00.791502 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-19 03:39:00.791513 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-19 03:39:00.791524 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-19 
03:39:00.791534 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-19 03:39:00.791544 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-02-19 03:39:00.791555 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-02-19 03:39:00.791565 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 03:39:00.791577 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 03:39:00.791588 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 03:39:00.791599 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 03:39:00.791607 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-02-19 03:39:00.791615 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-02-19 03:39:00.791621 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-02-19 03:39:00.791628 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-02-19 03:39:00.791635 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-02-19 03:39:00.791641 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-02-19 03:39:00.791648 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-02-19 03:39:00.791654 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-02-19 03:39:00.791661 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-02-19 03:39:00.791668 | orchestrator |
2026-02-19 03:39:00.791675 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-19 03:39:00.791681 | orchestrator | Thursday 19 February 2026 03:38:55 +0000 (0:00:06.956) 0:02:22.011 *****
2026-02-19 03:39:00.791688 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:00.791695 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:00.791701 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:00.791707 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:39:00.791720 | orchestrator |
2026-02-19 03:39:00.791726 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-19 03:39:00.791731 | orchestrator | Thursday 19 February 2026 03:38:56 +0000 (0:00:01.296) 0:02:23.308 *****
2026-02-19 03:39:00.791737 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-19 03:39:00.791744 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 03:39:00.791749 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 03:39:00.791755 | orchestrator |
2026-02-19 03:39:00.791761 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-19 03:39:00.791767 | orchestrator | Thursday 19 February 2026 03:38:57 +0000 (0:00:00.753) 0:02:24.061 *****
2026-02-19 03:39:00.791772 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-19 03:39:00.791778 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 03:39:00.791784 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 03:39:00.791789 | orchestrator |
2026-02-19 03:39:00.791795 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-19 03:39:00.791801 | orchestrator | Thursday 19 February 2026 03:38:58 +0000 (0:00:01.272) 0:02:25.334 *****
2026-02-19 03:39:00.791806 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:39:00.791812 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:39:00.791818 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:39:00.791823 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:00.791829 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:00.791835 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:00.791840 | orchestrator |
2026-02-19 03:39:00.791846 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 03:39:00.791856 | orchestrator | Thursday 19 February 2026 03:38:59 +0000 (0:00:00.613) 0:02:26.236 *****
2026-02-19 03:39:00.791862 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:39:00.791867 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:39:00.791873 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:39:00.791879 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:00.791884 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:00.791890 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:00.791895 | orchestrator |
2026-02-19 03:39:00.791901 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 03:39:00.791907 | orchestrator | Thursday 19 February 2026 03:38:59 +0000 (0:00:00.848) 0:02:26.850 *****
2026-02-19 03:39:00.791912 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:00.791918 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:00.791924 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:00.791929 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:00.791935 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:00.791941 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:00.791946 | orchestrator |
2026-02-19 03:39:00.791956 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 03:39:15.401201 | orchestrator | Thursday 19 February 2026 03:39:00 +0000 (0:00:00.848) 0:02:27.698 *****
2026-02-19 03:39:15.401297 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:15.401308 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:15.401315 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:15.401321 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.401326 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.401333 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.401355 | orchestrator |
2026-02-19 03:39:15.401403 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 03:39:15.401409 | orchestrator | Thursday 19 February 2026 03:39:01 +0000 (0:00:00.619) 0:02:28.318 *****
2026-02-19 03:39:15.401415 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:15.401421 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:15.401427 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:15.401433 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.401438 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.401454 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.401473 | orchestrator |
2026-02-19 03:39:15.401485 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 03:39:15.401493 | orchestrator | Thursday 19 February 2026 03:39:02 +0000 (0:00:00.829) 0:02:29.147 *****
2026-02-19 03:39:15.401498 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:15.401504 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:15.401510 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:15.401516 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.401521 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.401527 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.401533 | orchestrator |
2026-02-19 03:39:15.401538 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 03:39:15.401544 | orchestrator | Thursday 19 February 2026 03:39:02 +0000 (0:00:00.675) 0:02:29.823 *****
2026-02-19 03:39:15.401550 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:15.401556 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:15.401562 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:15.401567 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.401573 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.401579 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.401584 | orchestrator |
2026-02-19 03:39:15.401590 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 03:39:15.401596 | orchestrator | Thursday 19 February 2026 03:39:03 +0000 (0:00:00.849) 0:02:30.672 *****
2026-02-19 03:39:15.401602 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:15.401607 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:15.401613 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:15.401619 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.401624 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.401630 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.401635 | orchestrator |
2026-02-19 03:39:15.401641 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 03:39:15.401647 | orchestrator | Thursday 19 February 2026 03:39:04 +0000 (0:00:00.641) 0:02:31.313 *****
2026-02-19 03:39:15.401653 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.401659 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.401664 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.401670 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:39:15.401677 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:39:15.401682 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:39:15.401688 | orchestrator |
2026-02-19 03:39:15.401694 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 03:39:15.401699 | orchestrator | Thursday 19 February 2026 03:39:08 +0000 (0:00:04.029) 0:02:35.343 *****
2026-02-19 03:39:15.401705 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:39:15.401711 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:39:15.401717 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:39:15.401722 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.401728 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.401734 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.401740 | orchestrator |
2026-02-19 03:39:15.401746 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 03:39:15.401758 | orchestrator | Thursday 19 February 2026 03:39:09 +0000 (0:00:00.654) 0:02:35.997 *****
2026-02-19 03:39:15.401765 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:39:15.401771 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:39:15.401778 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:39:15.401784 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.401791 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.401797 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.401803 | orchestrator |
2026-02-19 03:39:15.401810 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 03:39:15.401817 | orchestrator | Thursday 19 February 2026 03:39:10 +0000 (0:00:01.006) 0:02:37.004 *****
2026-02-19 03:39:15.401823 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:15.401830 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:15.401848 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:15.401854 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.401860 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.401867 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.401874 | orchestrator |
2026-02-19 03:39:15.401880 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 03:39:15.401887 | orchestrator | Thursday 19 February 2026 03:39:10 +0000 (0:00:00.614) 0:02:37.618 *****
2026-02-19 03:39:15.401894 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-19 03:39:15.401902 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 03:39:15.401909 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 03:39:15.401920 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.401945 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.401953 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.401960 | orchestrator |
2026-02-19 03:39:15.401966 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-19 03:39:15.401973 | orchestrator | Thursday 19 February 2026 03:39:11 +0000 (0:00:00.871) 0:02:38.489 *****
2026-02-19 03:39:15.401986 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-19 03:39:15.401999 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-19 03:39:15.402010 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-19 03:39:15.402071 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:15.402079 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-19 03:39:15.402087 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-19 03:39:15.402100 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-19 03:39:15.402107 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:15.402112 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:15.402118 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.402124 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.402129 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.402135 | orchestrator |
2026-02-19 03:39:15.402141 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-19 03:39:15.402147 | orchestrator | Thursday 19 February 2026 03:39:12 +0000 (0:00:00.676) 0:02:39.166 *****
2026-02-19 03:39:15.402152 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:15.402158 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:15.402164 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:15.402169 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.402175 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.402180 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.402186 | orchestrator |
2026-02-19 03:39:15.402192 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-19 03:39:15.402197 | orchestrator | Thursday 19 February 2026 03:39:13 +0000 (0:00:00.859) 0:02:40.025 *****
2026-02-19 03:39:15.402203 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:15.402210 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:15.402219 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:15.402230 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.402240 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.402246 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.402252 | orchestrator |
2026-02-19 03:39:15.402258 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 03:39:15.402264 | orchestrator | Thursday 19 February 2026 03:39:13 +0000 (0:00:00.571) 0:02:40.597 *****
2026-02-19 03:39:15.402274 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:15.402280 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:15.402285 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:15.402291 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.402297 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.402302 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.402308 | orchestrator |
2026-02-19 03:39:15.402314 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 03:39:15.402320 | orchestrator | Thursday 19 February 2026 03:39:14 +0000 (0:00:00.893) 0:02:41.491 *****
2026-02-19 03:39:15.402325 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:15.402331 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:15.402337 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:15.402343 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:15.402348 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:15.402354 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:15.402381 | orchestrator |
2026-02-19 03:39:15.402388 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 03:39:15.402399 | orchestrator | Thursday 19 February 2026 03:39:15 +0000 (0:00:00.816) 0:02:42.308 *****
2026-02-19 03:39:33.176015 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.176103 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:33.176116 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:33.176128 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:33.176139 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:33.176150 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:33.176184 | orchestrator |
2026-02-19 03:39:33.176193 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 03:39:33.176200 | orchestrator | Thursday 19 February 2026 03:39:16 +0000 (0:00:00.661) 0:02:42.969 *****
2026-02-19 03:39:33.176206 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:39:33.176213 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:39:33.176219 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:33.176226 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:39:33.176232 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:33.176238 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:33.176244 | orchestrator |
2026-02-19 03:39:33.176250 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 03:39:33.176256 | orchestrator | Thursday 19 February 2026 03:39:16 +0000 (0:00:00.847) 0:02:43.817 *****
2026-02-19 03:39:33.176263 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:39:33.176269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:39:33.176278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:39:33.176289 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.176299 | orchestrator |
2026-02-19 03:39:33.176310 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 03:39:33.176320 | orchestrator | Thursday 19 February 2026 03:39:17 +0000 (0:00:00.445) 0:02:44.262 *****
2026-02-19 03:39:33.176331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:39:33.176338 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:39:33.176344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:39:33.176353 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.176408 | orchestrator |
2026-02-19 03:39:33.176419 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 03:39:33.176429 | orchestrator | Thursday 19 February 2026 03:39:17 +0000 (0:00:00.415) 0:02:44.677 *****
2026-02-19 03:39:33.176436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:39:33.176442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:39:33.176448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:39:33.176454 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.176460 | orchestrator |
2026-02-19 03:39:33.176467 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 03:39:33.176473 | orchestrator | Thursday 19 February 2026 03:39:18 +0000 (0:00:00.421) 0:02:45.099 *****
2026-02-19 03:39:33.176479 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:39:33.176485 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:39:33.176491 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:39:33.176497 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:33.176503 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:33.176509 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:33.176515 | orchestrator |
2026-02-19 03:39:33.176521 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 03:39:33.176537 | orchestrator | Thursday 19 February 2026 03:39:18 +0000 (0:00:00.615) 0:02:45.714 *****
2026-02-19 03:39:33.176546 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-19 03:39:33.176564 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-19 03:39:33.176575 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-19 03:39:33.176586 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-19 03:39:33.176596 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:33.176606 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-19 03:39:33.176616 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:33.176622 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-19 03:39:33.176628 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:33.176634 | orchestrator |
2026-02-19 03:39:33.176640 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-19 03:39:33.176653 | orchestrator | Thursday 19 February 2026 03:39:20 +0000 (0:00:01.739) 0:02:47.453 *****
2026-02-19 03:39:33.176660 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:39:33.176670 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:39:33.176680 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:39:33.176692 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:39:33.176702 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:39:33.176713 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:39:33.176723 | orchestrator |
2026-02-19 03:39:33.176734 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-19 03:39:33.176745 | orchestrator | Thursday 19 February 2026 03:39:23 +0000 (0:00:02.609) 0:02:50.063 *****
2026-02-19 03:39:33.176755 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:39:33.176778 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:39:33.176785 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:39:33.176791 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:39:33.176797 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:39:33.176803 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:39:33.176809 | orchestrator |
2026-02-19 03:39:33.176816 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-19 03:39:33.176822 | orchestrator | Thursday 19 February 2026 03:39:24 +0000 (0:00:01.009) 0:02:51.072 *****
2026-02-19 03:39:33.176828 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.176834 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:33.176840 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:33.176847 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:39:33.176853 | orchestrator |
2026-02-19 03:39:33.176859 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-19 03:39:33.176865 | orchestrator | Thursday 19 February 2026 03:39:25 +0000 (0:00:01.163) 0:02:52.235 *****
2026-02-19 03:39:33.176871 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:39:33.176891 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:39:33.176898 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:39:33.176904 | orchestrator |
2026-02-19 03:39:33.176911 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-19 03:39:33.176917 | orchestrator | Thursday 19 February 2026 03:39:25 +0000 (0:00:00.343) 0:02:52.579 *****
2026-02-19 03:39:33.176923 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:39:33.176929 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:39:33.176935 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:39:33.176943 | orchestrator |
2026-02-19 03:39:33.176953 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-19 03:39:33.176962 | orchestrator | Thursday 19 February 2026 03:39:27 +0000 (0:00:01.582) 0:02:54.162 *****
2026-02-19 03:39:33.176971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 03:39:33.176981 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-19 03:39:33.176991 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-19 03:39:33.177000 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:33.177010 | orchestrator |
2026-02-19 03:39:33.177018 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-19 03:39:33.177027 | orchestrator | Thursday 19 February 2026 03:39:27 +0000 (0:00:00.691) 0:02:54.854 *****
2026-02-19 03:39:33.177036 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:39:33.177044 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:39:33.177053 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:39:33.177061 | orchestrator |
2026-02-19 03:39:33.177071 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-19 03:39:33.177081 | orchestrator | Thursday 19 February 2026 03:39:28 +0000 (0:00:00.391) 0:02:55.246 *****
2026-02-19 03:39:33.177091 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:33.177100 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:33.177109 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:33.177126 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:39:33.177135 | orchestrator |
2026-02-19 03:39:33.177144 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-19 03:39:33.177153 | orchestrator | Thursday 19 February 2026 03:39:29 +0000 (0:00:01.239) 0:02:56.485 *****
2026-02-19 03:39:33.177164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:39:33.177174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:39:33.177184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:39:33.177194 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.177205 | orchestrator |
2026-02-19 03:39:33.177215 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-19 03:39:33.177227 | orchestrator | Thursday 19 February 2026 03:39:29 +0000 (0:00:00.427) 0:02:56.913 *****
2026-02-19 03:39:33.177234 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.177240 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:33.177246 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:33.177253 | orchestrator |
2026-02-19 03:39:33.177259 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-19 03:39:33.177265 | orchestrator | Thursday 19 February 2026 03:39:30 +0000 (0:00:00.327) 0:02:57.240 *****
2026-02-19 03:39:33.177271 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.177278 | orchestrator |
2026-02-19 03:39:33.177284 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-19 03:39:33.177290 | orchestrator | Thursday 19 February 2026 03:39:30 +0000 (0:00:00.231) 0:02:57.472 *****
2026-02-19 03:39:33.177296 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.177302 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:33.177309 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:33.177315 | orchestrator |
2026-02-19 03:39:33.177321 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-19 03:39:33.177327 | orchestrator | Thursday 19 February 2026 03:39:30 +0000 (0:00:00.317) 0:02:57.789 *****
2026-02-19 03:39:33.177334 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.177340 | orchestrator |
2026-02-19 03:39:33.177346 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-19 03:39:33.177352 | orchestrator | Thursday 19 February 2026 03:39:31 +0000 (0:00:00.707) 0:02:58.496 *****
2026-02-19 03:39:33.177391 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.177399 | orchestrator |
2026-02-19 03:39:33.177405 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-19 03:39:33.177411 | orchestrator | Thursday 19 February 2026 03:39:31 +0000 (0:00:00.237) 0:02:58.734 *****
2026-02-19 03:39:33.177417 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.177423 | orchestrator |
2026-02-19 03:39:33.177430 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-19 03:39:33.177436 | orchestrator | Thursday 19 February 2026 03:39:31 +0000 (0:00:00.140) 0:02:58.875 *****
2026-02-19 03:39:33.177448 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.177465 | orchestrator |
2026-02-19 03:39:33.177479 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-19 03:39:33.177485 | orchestrator | Thursday 19 February 2026 03:39:32 +0000 (0:00:00.274) 0:02:59.149 *****
2026-02-19 03:39:33.177492 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.177498 | orchestrator |
2026-02-19 03:39:33.177504 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-19 03:39:33.177510 | orchestrator | Thursday 19 February 2026 03:39:32 +0000 (0:00:00.243) 0:02:59.393 *****
2026-02-19 03:39:33.177516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:39:33.177523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:39:33.177529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:39:33.177543 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:33.177550 | orchestrator |
2026-02-19 03:39:33.177556 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-19 03:39:33.177562 | orchestrator | Thursday 19 February 2026 03:39:32 +0000 (0:00:00.502) 0:02:59.895 *****
2026-02-19 03:39:33.177576 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:52.838331 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:52.838463 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:52.838471 | orchestrator |
2026-02-19 03:39:52.838477 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-19 03:39:52.838483 | orchestrator | Thursday 19 February 2026 03:39:33 +0000 (0:00:00.333) 0:03:00.228 *****
2026-02-19 03:39:52.838487 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:52.838491 | orchestrator |
2026-02-19 03:39:52.838495 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-19 03:39:52.838499 | orchestrator | Thursday 19 February 2026 03:39:33 +0000 (0:00:00.262) 0:03:00.491 *****
2026-02-19 03:39:52.838503 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:52.838507 | orchestrator |
2026-02-19 03:39:52.838511 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-19 03:39:52.838515 | orchestrator | Thursday 19 February 2026 03:39:33 +0000 (0:00:00.229) 0:03:00.721 *****
2026-02-19 03:39:52.838520 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:52.838524 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:52.838527 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:52.838532 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:39:52.838536 | orchestrator |
2026-02-19 03:39:52.838540 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-19 03:39:52.838544 | orchestrator | Thursday 19 February 2026 03:39:34 +0000 (0:00:01.165) 0:03:01.886 *****
2026-02-19 03:39:52.838548 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:39:52.838554 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:39:52.838558 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:39:52.838562 | orchestrator |
2026-02-19 03:39:52.838566 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-19 03:39:52.838570 | orchestrator | Thursday 19 February 2026 03:39:35 +0000 (0:00:00.349) 0:03:02.235 *****
2026-02-19 03:39:52.838574 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:39:52.838578 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:39:52.838582 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:39:52.838586 | orchestrator |
2026-02-19 03:39:52.838590 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-19 03:39:52.838594 | orchestrator | Thursday 19 February 2026 03:39:36 +0000 (0:00:01.649) 0:03:03.884 *****
2026-02-19 03:39:52.838598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:39:52.838603 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:39:52.838607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:39:52.838611 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:52.838615 | orchestrator |
2026-02-19 03:39:52.838619 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-19 03:39:52.838623 | orchestrator | Thursday 19 February 2026 03:39:37 +0000 (0:00:00.673) 0:03:04.559 *****
2026-02-19 03:39:52.838627 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:39:52.838631 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:39:52.838635 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:39:52.838639 | orchestrator |
2026-02-19 03:39:52.838643 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-19 03:39:52.838647 | orchestrator | Thursday 19 February 2026 03:39:37 +0000 (0:00:00.363) 0:03:04.922 *****
2026-02-19 03:39:52.838651 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:52.838655 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:52.838659 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:52.838681 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:39:52.838685 | orchestrator |
2026-02-19 03:39:52.838689 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-19 03:39:52.838693 | orchestrator | Thursday 19 February 2026 03:39:39 +0000 (0:00:01.183) 0:03:06.105 *****
2026-02-19 03:39:52.838697 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:39:52.838701 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:39:52.838705 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:39:52.838708 | orchestrator |
2026-02-19 03:39:52.838712 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-19 03:39:52.838716 | orchestrator | Thursday 19 February 2026 03:39:39 +0000 (0:00:00.379) 0:03:06.485 *****
2026-02-19 03:39:52.838720 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:39:52.838724 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:39:52.838728 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:39:52.838732 | orchestrator |
2026-02-19 03:39:52.838736 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-19 03:39:52.838740 | orchestrator | Thursday 19 February 2026 03:39:40 +0000 (0:00:01.317) 0:03:07.802 *****
2026-02-19 03:39:52.838744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:39:52.838748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:39:52.838763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:39:52.838767 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:52.838771 | orchestrator |
2026-02-19 03:39:52.838775 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-19 03:39:52.838779 | orchestrator | Thursday 19 February 2026 03:39:41 +0000 (0:00:00.835) 0:03:08.638 *****
2026-02-19 03:39:52.838783 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:39:52.838787 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:39:52.838791 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:39:52.838795 | orchestrator |
2026-02-19 03:39:52.838799 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-19 03:39:52.838803 | orchestrator | Thursday 19 February 2026 03:39:42 +0000 (0:00:00.635) 0:03:09.274 *****
2026-02-19 03:39:52.838807 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:52.838811 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:52.838815 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:52.838819 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:39:52.838823 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:39:52.838827 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:39:52.838831 | orchestrator |
2026-02-19 03:39:52.838845 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-19 03:39:52.838850 | orchestrator | Thursday 19 February 2026 03:39:42 +0000 (0:00:00.625) 0:03:09.899 *****
2026-02-19 03:39:52.838854 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:39:52.838858 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:39:52.838870 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:39:52.838874 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:39:52.838878 | orchestrator |
2026-02-19 03:39:52.838882 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-19 03:39:52.838892 | orchestrator | Thursday 19 February 2026 03:39:44 +0000 (0:00:01.102) 0:03:11.001 *****
2026-02-19 03:39:52.838896 | orchestrator |
ok: [testbed-node-0] 2026-02-19 03:39:52.838900 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:39:52.838903 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:39:52.838907 | orchestrator | 2026-02-19 03:39:52.838911 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-19 03:39:52.838915 | orchestrator | Thursday 19 February 2026 03:39:44 +0000 (0:00:00.354) 0:03:11.356 ***** 2026-02-19 03:39:52.838919 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:39:52.838927 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:39:52.838931 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:39:52.838935 | orchestrator | 2026-02-19 03:39:52.838939 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-19 03:39:52.838943 | orchestrator | Thursday 19 February 2026 03:39:45 +0000 (0:00:01.313) 0:03:12.670 ***** 2026-02-19 03:39:52.838946 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-19 03:39:52.838950 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-19 03:39:52.838954 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-19 03:39:52.838958 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:39:52.838962 | orchestrator | 2026-02-19 03:39:52.838966 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-19 03:39:52.838970 | orchestrator | Thursday 19 February 2026 03:39:47 +0000 (0:00:01.315) 0:03:13.985 ***** 2026-02-19 03:39:52.838974 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:39:52.838978 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:39:52.838982 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:39:52.838985 | orchestrator | 2026-02-19 03:39:52.838989 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-19 03:39:52.838993 | orchestrator | 2026-02-19 
03:39:52.838997 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-19 03:39:52.839001 | orchestrator | Thursday 19 February 2026 03:39:47 +0000 (0:00:00.644) 0:03:14.629 ***** 2026-02-19 03:39:52.839006 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:39:52.839011 | orchestrator | 2026-02-19 03:39:52.839015 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-19 03:39:52.839018 | orchestrator | Thursday 19 February 2026 03:39:48 +0000 (0:00:00.755) 0:03:15.385 ***** 2026-02-19 03:39:52.839022 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:39:52.839026 | orchestrator | 2026-02-19 03:39:52.839030 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-19 03:39:52.839034 | orchestrator | Thursday 19 February 2026 03:39:49 +0000 (0:00:00.559) 0:03:15.944 ***** 2026-02-19 03:39:52.839038 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:39:52.839042 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:39:52.839046 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:39:52.839049 | orchestrator | 2026-02-19 03:39:52.839053 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-19 03:39:52.839057 | orchestrator | Thursday 19 February 2026 03:39:49 +0000 (0:00:00.756) 0:03:16.701 ***** 2026-02-19 03:39:52.839062 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:39:52.839068 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:39:52.839075 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:39:52.839083 | orchestrator | 2026-02-19 03:39:52.839092 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-02-19 03:39:52.839099 | orchestrator | Thursday 19 February 2026 03:39:50 +0000 (0:00:00.560) 0:03:17.262 ***** 2026-02-19 03:39:52.839105 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:39:52.839111 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:39:52.839117 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:39:52.839123 | orchestrator | 2026-02-19 03:39:52.839129 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-19 03:39:52.839136 | orchestrator | Thursday 19 February 2026 03:39:50 +0000 (0:00:00.353) 0:03:17.616 ***** 2026-02-19 03:39:52.839142 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:39:52.839149 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:39:52.839155 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:39:52.839162 | orchestrator | 2026-02-19 03:39:52.839173 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-19 03:39:52.839180 | orchestrator | Thursday 19 February 2026 03:39:51 +0000 (0:00:00.325) 0:03:17.941 ***** 2026-02-19 03:39:52.839193 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:39:52.839199 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:39:52.839206 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:39:52.839212 | orchestrator | 2026-02-19 03:39:52.839216 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-19 03:39:52.839220 | orchestrator | Thursday 19 February 2026 03:39:51 +0000 (0:00:00.743) 0:03:18.685 ***** 2026-02-19 03:39:52.839224 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:39:52.839228 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:39:52.839232 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:39:52.839236 | orchestrator | 2026-02-19 03:39:52.839240 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-19 
03:39:52.839244 | orchestrator | Thursday 19 February 2026 03:39:52 +0000 (0:00:00.712) 0:03:19.397 ***** 2026-02-19 03:39:52.839248 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:39:52.839252 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:39:52.839261 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:40:15.520048 | orchestrator | 2026-02-19 03:40:15.520145 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-19 03:40:15.520158 | orchestrator | Thursday 19 February 2026 03:39:52 +0000 (0:00:00.353) 0:03:19.751 ***** 2026-02-19 03:40:15.520167 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.520176 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:40:15.520184 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:40:15.520192 | orchestrator | 2026-02-19 03:40:15.520200 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-19 03:40:15.520208 | orchestrator | Thursday 19 February 2026 03:39:53 +0000 (0:00:00.757) 0:03:20.509 ***** 2026-02-19 03:40:15.520216 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.520224 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:40:15.520231 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:40:15.520239 | orchestrator | 2026-02-19 03:40:15.520247 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-19 03:40:15.520255 | orchestrator | Thursday 19 February 2026 03:39:54 +0000 (0:00:00.760) 0:03:21.270 ***** 2026-02-19 03:40:15.520263 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:40:15.520271 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:40:15.520279 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:40:15.520287 | orchestrator | 2026-02-19 03:40:15.520295 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-19 03:40:15.520303 | orchestrator | 
Thursday 19 February 2026 03:39:54 +0000 (0:00:00.552) 0:03:21.822 ***** 2026-02-19 03:40:15.520311 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.520319 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:40:15.520327 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:40:15.520335 | orchestrator | 2026-02-19 03:40:15.520343 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-19 03:40:15.520425 | orchestrator | Thursday 19 February 2026 03:39:55 +0000 (0:00:00.357) 0:03:22.179 ***** 2026-02-19 03:40:15.520438 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:40:15.520446 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:40:15.520454 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:40:15.520461 | orchestrator | 2026-02-19 03:40:15.520469 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-19 03:40:15.520477 | orchestrator | Thursday 19 February 2026 03:39:55 +0000 (0:00:00.311) 0:03:22.490 ***** 2026-02-19 03:40:15.520485 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:40:15.520492 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:40:15.520500 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:40:15.520508 | orchestrator | 2026-02-19 03:40:15.520516 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-19 03:40:15.520524 | orchestrator | Thursday 19 February 2026 03:39:55 +0000 (0:00:00.333) 0:03:22.824 ***** 2026-02-19 03:40:15.520531 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:40:15.520560 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:40:15.520568 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:40:15.520576 | orchestrator | 2026-02-19 03:40:15.520586 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-19 03:40:15.520594 | orchestrator | Thursday 19 February 
2026 03:39:56 +0000 (0:00:00.666) 0:03:23.491 ***** 2026-02-19 03:40:15.520603 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:40:15.520612 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:40:15.520620 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:40:15.520629 | orchestrator | 2026-02-19 03:40:15.520638 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-19 03:40:15.520647 | orchestrator | Thursday 19 February 2026 03:39:56 +0000 (0:00:00.326) 0:03:23.818 ***** 2026-02-19 03:40:15.520656 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:40:15.520665 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:40:15.520673 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:40:15.520680 | orchestrator | 2026-02-19 03:40:15.520688 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-19 03:40:15.520696 | orchestrator | Thursday 19 February 2026 03:39:57 +0000 (0:00:00.325) 0:03:24.143 ***** 2026-02-19 03:40:15.520704 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.520711 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:40:15.520719 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:40:15.520727 | orchestrator | 2026-02-19 03:40:15.520734 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-19 03:40:15.520742 | orchestrator | Thursday 19 February 2026 03:39:57 +0000 (0:00:00.359) 0:03:24.503 ***** 2026-02-19 03:40:15.520750 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.520758 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:40:15.520765 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:40:15.520773 | orchestrator | 2026-02-19 03:40:15.520780 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-19 03:40:15.520788 | orchestrator | Thursday 19 February 2026 03:39:58 +0000 (0:00:00.655) 
0:03:25.159 ***** 2026-02-19 03:40:15.520796 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.520803 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:40:15.520811 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:40:15.520819 | orchestrator | 2026-02-19 03:40:15.520840 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-19 03:40:15.520849 | orchestrator | Thursday 19 February 2026 03:39:58 +0000 (0:00:00.604) 0:03:25.764 ***** 2026-02-19 03:40:15.520857 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.520864 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:40:15.520872 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:40:15.520880 | orchestrator | 2026-02-19 03:40:15.520887 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-19 03:40:15.520895 | orchestrator | Thursday 19 February 2026 03:39:59 +0000 (0:00:00.343) 0:03:26.107 ***** 2026-02-19 03:40:15.520904 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:40:15.520912 | orchestrator | 2026-02-19 03:40:15.520920 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-19 03:40:15.520928 | orchestrator | Thursday 19 February 2026 03:40:00 +0000 (0:00:00.999) 0:03:27.106 ***** 2026-02-19 03:40:15.520935 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:40:15.520943 | orchestrator | 2026-02-19 03:40:15.520951 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-19 03:40:15.520974 | orchestrator | Thursday 19 February 2026 03:40:00 +0000 (0:00:00.194) 0:03:27.302 ***** 2026-02-19 03:40:15.520983 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-19 03:40:15.520990 | orchestrator | 2026-02-19 03:40:15.520998 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-02-19 03:40:15.521006 | orchestrator | Thursday 19 February 2026 03:40:01 +0000 (0:00:01.094) 0:03:28.396 ***** 2026-02-19 03:40:15.521020 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.521028 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:40:15.521036 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:40:15.521044 | orchestrator | 2026-02-19 03:40:15.521051 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-19 03:40:15.521059 | orchestrator | Thursday 19 February 2026 03:40:01 +0000 (0:00:00.369) 0:03:28.766 ***** 2026-02-19 03:40:15.521067 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.521075 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:40:15.521082 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:40:15.521090 | orchestrator | 2026-02-19 03:40:15.521098 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-19 03:40:15.521106 | orchestrator | Thursday 19 February 2026 03:40:02 +0000 (0:00:00.694) 0:03:29.461 ***** 2026-02-19 03:40:15.521113 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:40:15.521121 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:40:15.521129 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:40:15.521137 | orchestrator | 2026-02-19 03:40:15.521145 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-19 03:40:15.521153 | orchestrator | Thursday 19 February 2026 03:40:03 +0000 (0:00:01.263) 0:03:30.724 ***** 2026-02-19 03:40:15.521160 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:40:15.521168 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:40:15.521176 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:40:15.521184 | orchestrator | 2026-02-19 03:40:15.521192 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-02-19 03:40:15.521199 | orchestrator | Thursday 19 February 2026 03:40:04 +0000 (0:00:00.897) 0:03:31.622 ***** 2026-02-19 03:40:15.521207 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:40:15.521215 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:40:15.521222 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:40:15.521230 | orchestrator | 2026-02-19 03:40:15.521237 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-19 03:40:15.521245 | orchestrator | Thursday 19 February 2026 03:40:05 +0000 (0:00:00.735) 0:03:32.357 ***** 2026-02-19 03:40:15.521253 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.521261 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:40:15.521268 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:40:15.521276 | orchestrator | 2026-02-19 03:40:15.521284 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-19 03:40:15.521291 | orchestrator | Thursday 19 February 2026 03:40:06 +0000 (0:00:01.041) 0:03:33.399 ***** 2026-02-19 03:40:15.521299 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:40:15.521307 | orchestrator | 2026-02-19 03:40:15.521314 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-19 03:40:15.521322 | orchestrator | Thursday 19 February 2026 03:40:07 +0000 (0:00:01.342) 0:03:34.742 ***** 2026-02-19 03:40:15.521330 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.521338 | orchestrator | 2026-02-19 03:40:15.521346 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-19 03:40:15.521378 | orchestrator | Thursday 19 February 2026 03:40:08 +0000 (0:00:00.776) 0:03:35.518 ***** 2026-02-19 03:40:15.521393 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-19 03:40:15.521408 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:40:15.521423 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:40:15.521437 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-19 03:40:15.521449 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-19 03:40:15.521457 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-19 03:40:15.521464 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-19 03:40:15.521472 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-02-19 03:40:15.521480 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-19 03:40:15.521494 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-19 03:40:15.521502 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-19 03:40:15.521510 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-19 03:40:15.521517 | orchestrator | 2026-02-19 03:40:15.521525 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-19 03:40:15.521533 | orchestrator | Thursday 19 February 2026 03:40:11 +0000 (0:00:03.293) 0:03:38.812 ***** 2026-02-19 03:40:15.521540 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:40:15.521548 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:40:15.521561 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:40:15.521569 | orchestrator | 2026-02-19 03:40:15.521577 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-19 03:40:15.521584 | orchestrator | Thursday 19 February 2026 03:40:13 +0000 (0:00:01.224) 0:03:40.036 ***** 2026-02-19 03:40:15.521592 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.521600 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:40:15.521607 | orchestrator | ok: 
[testbed-node-2] 2026-02-19 03:40:15.521615 | orchestrator | 2026-02-19 03:40:15.521623 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-19 03:40:15.521630 | orchestrator | Thursday 19 February 2026 03:40:13 +0000 (0:00:00.599) 0:03:40.636 ***** 2026-02-19 03:40:15.521638 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:40:15.521646 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:40:15.521653 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:40:15.521661 | orchestrator | 2026-02-19 03:40:15.521668 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-19 03:40:15.521676 | orchestrator | Thursday 19 February 2026 03:40:14 +0000 (0:00:00.345) 0:03:40.982 ***** 2026-02-19 03:40:15.521684 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:40:15.521691 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:40:15.521699 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:40:15.521707 | orchestrator | 2026-02-19 03:40:15.521721 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-19 03:41:17.340972 | orchestrator | Thursday 19 February 2026 03:40:15 +0000 (0:00:01.446) 0:03:42.429 ***** 2026-02-19 03:41:17.342093 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:41:17.342151 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:41:17.342167 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:41:17.342181 | orchestrator | 2026-02-19 03:41:17.342196 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-19 03:41:17.342210 | orchestrator | Thursday 19 February 2026 03:40:16 +0000 (0:00:01.214) 0:03:43.643 ***** 2026-02-19 03:41:17.342224 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:17.342238 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:17.342250 | orchestrator | skipping: [testbed-node-2] 2026-02-19 
03:41:17.342262 | orchestrator | 2026-02-19 03:41:17.342276 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-19 03:41:17.342288 | orchestrator | Thursday 19 February 2026 03:40:17 +0000 (0:00:00.569) 0:03:44.213 ***** 2026-02-19 03:41:17.342301 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:41:17.342312 | orchestrator | 2026-02-19 03:41:17.342324 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-19 03:41:17.342336 | orchestrator | Thursday 19 February 2026 03:40:17 +0000 (0:00:00.558) 0:03:44.771 ***** 2026-02-19 03:41:17.342371 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:17.342385 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:17.342397 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:17.342410 | orchestrator | 2026-02-19 03:41:17.342423 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-19 03:41:17.342435 | orchestrator | Thursday 19 February 2026 03:40:18 +0000 (0:00:00.334) 0:03:45.106 ***** 2026-02-19 03:41:17.342448 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:17.342494 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:17.342507 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:17.342520 | orchestrator | 2026-02-19 03:41:17.342528 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-19 03:41:17.342536 | orchestrator | Thursday 19 February 2026 03:40:18 +0000 (0:00:00.542) 0:03:45.649 ***** 2026-02-19 03:41:17.342543 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:41:17.342551 | orchestrator | 2026-02-19 03:41:17.342559 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon 
container] ***************** 2026-02-19 03:41:17.342566 | orchestrator | Thursday 19 February 2026 03:40:19 +0000 (0:00:00.547) 0:03:46.196 ***** 2026-02-19 03:41:17.342573 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:41:17.342580 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:41:17.342587 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:41:17.342594 | orchestrator | 2026-02-19 03:41:17.342602 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-19 03:41:17.342609 | orchestrator | Thursday 19 February 2026 03:40:21 +0000 (0:00:01.761) 0:03:47.958 ***** 2026-02-19 03:41:17.342616 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:41:17.342623 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:41:17.342630 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:41:17.342637 | orchestrator | 2026-02-19 03:41:17.342645 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-19 03:41:17.342652 | orchestrator | Thursday 19 February 2026 03:40:22 +0000 (0:00:01.426) 0:03:49.384 ***** 2026-02-19 03:41:17.342672 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:41:17.342688 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:41:17.342695 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:41:17.342703 | orchestrator | 2026-02-19 03:41:17.342710 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-19 03:41:17.342717 | orchestrator | Thursday 19 February 2026 03:40:24 +0000 (0:00:01.890) 0:03:51.274 ***** 2026-02-19 03:41:17.342724 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:41:17.342731 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:41:17.342738 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:41:17.342745 | orchestrator | 2026-02-19 03:41:17.342752 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-02-19 03:41:17.342759 | orchestrator | Thursday 19 February 2026 03:40:26 +0000 (0:00:02.016) 0:03:53.291 ***** 2026-02-19 03:41:17.342767 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:41:17.342774 | orchestrator | 2026-02-19 03:41:17.342781 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-19 03:41:17.342788 | orchestrator | Thursday 19 February 2026 03:40:27 +0000 (0:00:00.804) 0:03:54.096 ***** 2026-02-19 03:41:17.342809 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-02-19 03:41:17.342817 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:41:17.342825 | orchestrator | 2026-02-19 03:41:17.342832 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-19 03:41:17.342839 | orchestrator | Thursday 19 February 2026 03:40:49 +0000 (0:00:21.940) 0:04:16.036 ***** 2026-02-19 03:41:17.342847 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:41:17.342854 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:41:17.342861 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:41:17.342868 | orchestrator | 2026-02-19 03:41:17.342876 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-19 03:41:17.342883 | orchestrator | Thursday 19 February 2026 03:40:58 +0000 (0:00:08.986) 0:04:25.023 ***** 2026-02-19 03:41:17.342890 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:17.342897 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:17.342904 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:17.342918 | orchestrator | 2026-02-19 03:41:17.342925 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-19 03:41:17.342933 | orchestrator | 
Thursday 19 February 2026 03:40:58 +0000 (0:00:00.336) 0:04:25.360 ***** 2026-02-19 03:41:17.342964 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__16347f3488f08f76cbd2d6405d4829fac28c30d3'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-19 03:41:17.342975 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__16347f3488f08f76cbd2d6405d4829fac28c30d3'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-19 03:41:17.342984 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__16347f3488f08f76cbd2d6405d4829fac28c30d3'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-19 03:41:17.342993 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__16347f3488f08f76cbd2d6405d4829fac28c30d3'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-19 03:41:17.343001 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__16347f3488f08f76cbd2d6405d4829fac28c30d3'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-19 03:41:17.343010 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__16347f3488f08f76cbd2d6405d4829fac28c30d3'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__16347f3488f08f76cbd2d6405d4829fac28c30d3'}])  2026-02-19 03:41:17.343019 | orchestrator | 2026-02-19 03:41:17.343026 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-19 03:41:17.343033 | orchestrator | Thursday 19 February 2026 03:41:13 +0000 (0:00:15.303) 0:04:40.663 ***** 2026-02-19 03:41:17.343041 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:17.343048 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:17.343055 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:17.343062 | orchestrator | 2026-02-19 03:41:17.343069 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-19 03:41:17.343076 | orchestrator | Thursday 19 February 2026 03:41:14 +0000 (0:00:00.367) 0:04:41.031 ***** 2026-02-19 03:41:17.343083 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:41:17.343090 | orchestrator | 2026-02-19 03:41:17.343097 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-19 03:41:17.343105 | orchestrator | Thursday 19 February 2026 03:41:14 +0000 (0:00:00.793) 0:04:41.825 ***** 2026-02-19 03:41:17.343112 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:41:17.343119 | orchestrator | ok: [testbed-node-1] 2026-02-19 
03:41:17.343126 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:41:17.343133 | orchestrator | 2026-02-19 03:41:17.343140 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-19 03:41:17.343153 | orchestrator | Thursday 19 February 2026 03:41:15 +0000 (0:00:00.350) 0:04:42.175 ***** 2026-02-19 03:41:17.343165 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:17.343172 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:17.343179 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:17.343186 | orchestrator | 2026-02-19 03:41:17.343193 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-19 03:41:17.343200 | orchestrator | Thursday 19 February 2026 03:41:15 +0000 (0:00:00.351) 0:04:42.527 ***** 2026-02-19 03:41:17.343259 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-19 03:41:17.343268 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-19 03:41:17.343276 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-19 03:41:17.343283 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:17.343290 | orchestrator | 2026-02-19 03:41:17.343297 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-19 03:41:17.343304 | orchestrator | Thursday 19 February 2026 03:41:16 +0000 (0:00:00.880) 0:04:43.408 ***** 2026-02-19 03:41:17.343311 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:41:17.343318 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:41:17.343330 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:41:17.343347 | orchestrator | 2026-02-19 03:41:17.343420 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-02-19 03:41:17.343431 | orchestrator | 2026-02-19 03:41:17.343453 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-02-19 03:41:44.113590 | orchestrator | Thursday 19 February 2026 03:41:17 +0000 (0:00:00.838) 0:04:44.246 ***** 2026-02-19 03:41:44.113728 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:41:44.113750 | orchestrator | 2026-02-19 03:41:44.113764 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-19 03:41:44.113775 | orchestrator | Thursday 19 February 2026 03:41:17 +0000 (0:00:00.515) 0:04:44.762 ***** 2026-02-19 03:41:44.113787 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:41:44.113798 | orchestrator | 2026-02-19 03:41:44.113809 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-19 03:41:44.113820 | orchestrator | Thursday 19 February 2026 03:41:18 +0000 (0:00:00.755) 0:04:45.517 ***** 2026-02-19 03:41:44.113832 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:41:44.113844 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:41:44.113855 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:41:44.113866 | orchestrator | 2026-02-19 03:41:44.113877 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-19 03:41:44.113888 | orchestrator | Thursday 19 February 2026 03:41:19 +0000 (0:00:00.787) 0:04:46.305 ***** 2026-02-19 03:41:44.113899 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:44.113911 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:44.113922 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:44.113933 | orchestrator | 2026-02-19 03:41:44.113944 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-19 03:41:44.113955 | orchestrator | Thursday 19 February 2026 03:41:19 +0000 
(0:00:00.325) 0:04:46.630 ***** 2026-02-19 03:41:44.113966 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:44.113977 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:44.113988 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:44.113999 | orchestrator | 2026-02-19 03:41:44.114010 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-19 03:41:44.114078 | orchestrator | Thursday 19 February 2026 03:41:20 +0000 (0:00:00.590) 0:04:47.220 ***** 2026-02-19 03:41:44.114091 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:44.114105 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:44.114146 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:44.114159 | orchestrator | 2026-02-19 03:41:44.114172 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-19 03:41:44.114185 | orchestrator | Thursday 19 February 2026 03:41:20 +0000 (0:00:00.326) 0:04:47.547 ***** 2026-02-19 03:41:44.114197 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:41:44.114210 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:41:44.114223 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:41:44.114234 | orchestrator | 2026-02-19 03:41:44.114247 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-19 03:41:44.114261 | orchestrator | Thursday 19 February 2026 03:41:21 +0000 (0:00:00.764) 0:04:48.311 ***** 2026-02-19 03:41:44.114273 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:44.114287 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:44.114306 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:44.114323 | orchestrator | 2026-02-19 03:41:44.114341 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-19 03:41:44.114389 | orchestrator | Thursday 19 February 2026 03:41:21 +0000 (0:00:00.301) 
0:04:48.613 ***** 2026-02-19 03:41:44.114407 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:44.114427 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:44.114441 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:44.114454 | orchestrator | 2026-02-19 03:41:44.114465 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-19 03:41:44.114476 | orchestrator | Thursday 19 February 2026 03:41:22 +0000 (0:00:00.576) 0:04:49.189 ***** 2026-02-19 03:41:44.114487 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:41:44.114498 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:41:44.114509 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:41:44.114519 | orchestrator | 2026-02-19 03:41:44.114530 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-19 03:41:44.114541 | orchestrator | Thursday 19 February 2026 03:41:23 +0000 (0:00:00.743) 0:04:49.933 ***** 2026-02-19 03:41:44.114552 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:41:44.114563 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:41:44.114573 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:41:44.114585 | orchestrator | 2026-02-19 03:41:44.114596 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-19 03:41:44.114607 | orchestrator | Thursday 19 February 2026 03:41:23 +0000 (0:00:00.774) 0:04:50.707 ***** 2026-02-19 03:41:44.114617 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:44.114628 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:44.114654 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:44.114665 | orchestrator | 2026-02-19 03:41:44.114676 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-19 03:41:44.114687 | orchestrator | Thursday 19 February 2026 03:41:24 +0000 (0:00:00.324) 0:04:51.031 ***** 2026-02-19 
03:41:44.114698 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:41:44.114709 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:41:44.114719 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:41:44.114730 | orchestrator | 2026-02-19 03:41:44.114741 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-19 03:41:44.114752 | orchestrator | Thursday 19 February 2026 03:41:24 +0000 (0:00:00.597) 0:04:51.629 ***** 2026-02-19 03:41:44.114763 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:44.114774 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:44.114785 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:44.114796 | orchestrator | 2026-02-19 03:41:44.114806 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-19 03:41:44.114817 | orchestrator | Thursday 19 February 2026 03:41:25 +0000 (0:00:00.316) 0:04:51.946 ***** 2026-02-19 03:41:44.114828 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:44.114839 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:44.114850 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:44.114861 | orchestrator | 2026-02-19 03:41:44.114900 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-19 03:41:44.114912 | orchestrator | Thursday 19 February 2026 03:41:25 +0000 (0:00:00.318) 0:04:52.264 ***** 2026-02-19 03:41:44.114923 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:44.114934 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:44.114945 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:44.114956 | orchestrator | 2026-02-19 03:41:44.114966 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-19 03:41:44.114977 | orchestrator | Thursday 19 February 2026 03:41:25 +0000 (0:00:00.331) 0:04:52.596 ***** 2026-02-19 03:41:44.114988 | 
orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:44.114999 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:44.115009 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:44.115024 | orchestrator | 2026-02-19 03:41:44.115042 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-19 03:41:44.115059 | orchestrator | Thursday 19 February 2026 03:41:26 +0000 (0:00:00.591) 0:04:53.187 ***** 2026-02-19 03:41:44.115075 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:44.115091 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:44.115107 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:44.115123 | orchestrator | 2026-02-19 03:41:44.115139 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-19 03:41:44.115155 | orchestrator | Thursday 19 February 2026 03:41:26 +0000 (0:00:00.333) 0:04:53.521 ***** 2026-02-19 03:41:44.115173 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:41:44.115191 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:41:44.115207 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:41:44.115223 | orchestrator | 2026-02-19 03:41:44.115239 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-19 03:41:44.115256 | orchestrator | Thursday 19 February 2026 03:41:26 +0000 (0:00:00.345) 0:04:53.866 ***** 2026-02-19 03:41:44.115274 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:41:44.115291 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:41:44.115308 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:41:44.115326 | orchestrator | 2026-02-19 03:41:44.115342 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-19 03:41:44.115452 | orchestrator | Thursday 19 February 2026 03:41:27 +0000 (0:00:00.334) 0:04:54.200 ***** 2026-02-19 03:41:44.115470 | orchestrator | ok: [testbed-node-0] 
2026-02-19 03:41:44.115486 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:41:44.115503 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:41:44.115521 | orchestrator | 2026-02-19 03:41:44.115538 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-19 03:41:44.115555 | orchestrator | Thursday 19 February 2026 03:41:27 +0000 (0:00:00.670) 0:04:54.871 ***** 2026-02-19 03:41:44.115571 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 03:41:44.115588 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 03:41:44.115605 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 03:41:44.115620 | orchestrator | 2026-02-19 03:41:44.115636 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-19 03:41:44.115653 | orchestrator | Thursday 19 February 2026 03:41:28 +0000 (0:00:00.585) 0:04:55.456 ***** 2026-02-19 03:41:44.115671 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:41:44.115689 | orchestrator | 2026-02-19 03:41:44.115706 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-19 03:41:44.115723 | orchestrator | Thursday 19 February 2026 03:41:29 +0000 (0:00:00.485) 0:04:55.941 ***** 2026-02-19 03:41:44.115740 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:41:44.115758 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:41:44.115776 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:41:44.115794 | orchestrator | 2026-02-19 03:41:44.115812 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-19 03:41:44.115848 | orchestrator | Thursday 19 February 2026 03:41:29 +0000 (0:00:00.857) 0:04:56.799 ***** 2026-02-19 03:41:44.115867 | 
orchestrator | skipping: [testbed-node-0] 2026-02-19 03:41:44.115884 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:41:44.115900 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:41:44.115917 | orchestrator | 2026-02-19 03:41:44.115932 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-19 03:41:44.115949 | orchestrator | Thursday 19 February 2026 03:41:30 +0000 (0:00:00.290) 0:04:57.089 ***** 2026-02-19 03:41:44.115966 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-19 03:41:44.115983 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-19 03:41:44.116003 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-19 03:41:44.116023 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-19 03:41:44.116040 | orchestrator | 2026-02-19 03:41:44.116070 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-19 03:41:44.116090 | orchestrator | Thursday 19 February 2026 03:41:41 +0000 (0:00:11.060) 0:05:08.149 ***** 2026-02-19 03:41:44.116108 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:41:44.116126 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:41:44.116144 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:41:44.116161 | orchestrator | 2026-02-19 03:41:44.116180 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-19 03:41:44.116198 | orchestrator | Thursday 19 February 2026 03:41:41 +0000 (0:00:00.368) 0:05:08.518 ***** 2026-02-19 03:41:44.116217 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-19 03:41:44.116235 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-19 03:41:44.116246 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-19 03:41:44.116257 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-19 03:41:44.116267 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:41:44.116278 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:41:44.116289 | orchestrator | 2026-02-19 03:41:44.116300 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-19 03:41:44.116328 | orchestrator | Thursday 19 February 2026 03:41:44 +0000 (0:00:02.506) 0:05:11.024 ***** 2026-02-19 03:42:44.614989 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-19 03:42:44.615068 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-19 03:42:44.615081 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-19 03:42:44.615085 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-19 03:42:44.615090 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-19 03:42:44.615094 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-19 03:42:44.615099 | orchestrator | 2026-02-19 03:42:44.615103 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-19 03:42:44.615108 | orchestrator | Thursday 19 February 2026 03:41:45 +0000 (0:00:01.200) 0:05:12.224 ***** 2026-02-19 03:42:44.615113 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:42:44.615117 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:42:44.615121 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:42:44.615125 | orchestrator | 2026-02-19 03:42:44.615129 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-19 03:42:44.615133 | orchestrator | Thursday 19 February 2026 03:41:45 +0000 (0:00:00.652) 0:05:12.876 ***** 2026-02-19 03:42:44.615138 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:42:44.615142 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:42:44.615146 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:42:44.615150 | 
orchestrator | 2026-02-19 03:42:44.615153 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-19 03:42:44.615157 | orchestrator | Thursday 19 February 2026 03:41:46 +0000 (0:00:00.293) 0:05:13.170 ***** 2026-02-19 03:42:44.615175 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:42:44.615180 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:42:44.615183 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:42:44.615187 | orchestrator | 2026-02-19 03:42:44.615191 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-19 03:42:44.615195 | orchestrator | Thursday 19 February 2026 03:41:46 +0000 (0:00:00.453) 0:05:13.624 ***** 2026-02-19 03:42:44.615199 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:42:44.615203 | orchestrator | 2026-02-19 03:42:44.615207 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-19 03:42:44.615211 | orchestrator | Thursday 19 February 2026 03:41:47 +0000 (0:00:00.488) 0:05:14.112 ***** 2026-02-19 03:42:44.615215 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:42:44.615218 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:42:44.615222 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:42:44.615226 | orchestrator | 2026-02-19 03:42:44.615230 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-19 03:42:44.615233 | orchestrator | Thursday 19 February 2026 03:41:47 +0000 (0:00:00.291) 0:05:14.404 ***** 2026-02-19 03:42:44.615237 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:42:44.615241 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:42:44.615245 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:42:44.615248 | orchestrator | 2026-02-19 03:42:44.615252 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-19 03:42:44.615256 | orchestrator | Thursday 19 February 2026 03:41:47 +0000 (0:00:00.472) 0:05:14.876 ***** 2026-02-19 03:42:44.615260 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:42:44.615264 | orchestrator | 2026-02-19 03:42:44.615268 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-19 03:42:44.615272 | orchestrator | Thursday 19 February 2026 03:41:48 +0000 (0:00:00.483) 0:05:15.360 ***** 2026-02-19 03:42:44.615276 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:42:44.615279 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:42:44.615283 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:42:44.615287 | orchestrator | 2026-02-19 03:42:44.615291 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-19 03:42:44.615295 | orchestrator | Thursday 19 February 2026 03:41:49 +0000 (0:00:01.216) 0:05:16.576 ***** 2026-02-19 03:42:44.615298 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:42:44.615302 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:42:44.615306 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:42:44.615310 | orchestrator | 2026-02-19 03:42:44.615313 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-19 03:42:44.615317 | orchestrator | Thursday 19 February 2026 03:41:51 +0000 (0:00:01.361) 0:05:17.938 ***** 2026-02-19 03:42:44.615321 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:42:44.615324 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:42:44.615329 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:42:44.615332 | orchestrator | 2026-02-19 03:42:44.615336 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-02-19 03:42:44.615382 | orchestrator | Thursday 19 February 2026 03:41:52 +0000 (0:00:01.881) 0:05:19.819 ***** 2026-02-19 03:42:44.615387 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:42:44.615391 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:42:44.615395 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:42:44.615399 | orchestrator | 2026-02-19 03:42:44.615402 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-19 03:42:44.615406 | orchestrator | Thursday 19 February 2026 03:41:54 +0000 (0:00:02.064) 0:05:21.884 ***** 2026-02-19 03:42:44.615410 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:42:44.615413 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:42:44.615417 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-19 03:42:44.615425 | orchestrator | 2026-02-19 03:42:44.615429 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-19 03:42:44.615433 | orchestrator | Thursday 19 February 2026 03:41:55 +0000 (0:00:00.657) 0:05:22.541 ***** 2026-02-19 03:42:44.615437 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-02-19 03:42:44.615441 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-02-19 03:42:44.615453 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-02-19 03:42:44.615458 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-02-19 03:42:44.615461 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-02-19 03:42:44.615465 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-19 03:42:44.615469 | orchestrator | 2026-02-19 03:42:44.615473 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-19 03:42:44.615477 | orchestrator | Thursday 19 February 2026 03:42:26 +0000 (0:00:30.394) 0:05:52.936 ***** 2026-02-19 03:42:44.615480 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-19 03:42:44.615484 | orchestrator | 2026-02-19 03:42:44.615488 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-19 03:42:44.615491 | orchestrator | Thursday 19 February 2026 03:42:27 +0000 (0:00:01.367) 0:05:54.303 ***** 2026-02-19 03:42:44.615495 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:42:44.615499 | orchestrator | 2026-02-19 03:42:44.615503 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-19 03:42:44.615515 | orchestrator | Thursday 19 February 2026 03:42:27 +0000 (0:00:00.319) 0:05:54.623 ***** 2026-02-19 03:42:44.615519 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:42:44.615522 | orchestrator | 2026-02-19 03:42:44.615532 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-19 03:42:44.615536 | orchestrator | Thursday 19 February 2026 03:42:27 +0000 (0:00:00.159) 0:05:54.783 ***** 2026-02-19 03:42:44.615540 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-19 03:42:44.615543 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-19 03:42:44.615547 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-19 03:42:44.615551 | orchestrator | 2026-02-19 03:42:44.615556 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-02-19 03:42:44.615563 | orchestrator | Thursday 19 February 2026 03:42:34 +0000 (0:00:06.511) 0:06:01.294 ***** 2026-02-19 03:42:44.615570 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-19 03:42:44.615579 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-19 03:42:44.615587 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-19 03:42:44.615595 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-19 03:42:44.615602 | orchestrator | 2026-02-19 03:42:44.615609 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-19 03:42:44.615616 | orchestrator | Thursday 19 February 2026 03:42:39 +0000 (0:00:05.153) 0:06:06.448 ***** 2026-02-19 03:42:44.615622 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:42:44.615629 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:42:44.615635 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:42:44.615642 | orchestrator | 2026-02-19 03:42:44.615648 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-19 03:42:44.615654 | orchestrator | Thursday 19 February 2026 03:42:40 +0000 (0:00:00.708) 0:06:07.156 ***** 2026-02-19 03:42:44.615660 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:42:44.615671 | orchestrator | 2026-02-19 03:42:44.615678 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-19 03:42:44.615685 | orchestrator | Thursday 19 February 2026 03:42:40 +0000 (0:00:00.545) 0:06:07.702 ***** 2026-02-19 03:42:44.615691 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:42:44.615698 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:42:44.615704 | orchestrator | ok: 
[testbed-node-2]
2026-02-19 03:42:44.615710 | orchestrator |
2026-02-19 03:42:44.615717 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-19 03:42:44.615724 | orchestrator | Thursday 19 February 2026 03:42:41 +0000 (0:00:00.590) 0:06:08.292 *****
2026-02-19 03:42:44.615730 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:42:44.615737 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:42:44.615744 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:42:44.615750 | orchestrator |
2026-02-19 03:42:44.615757 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-19 03:42:44.615765 | orchestrator | Thursday 19 February 2026 03:42:42 +0000 (0:00:01.277) 0:06:09.570 *****
2026-02-19 03:42:44.615786 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 03:42:44.615793 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-19 03:42:44.615803 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-19 03:42:44.615809 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:42:44.615815 | orchestrator |
2026-02-19 03:42:44.615821 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-19 03:42:44.615827 | orchestrator | Thursday 19 February 2026 03:42:43 +0000 (0:00:00.642) 0:06:10.212 *****
2026-02-19 03:42:44.615833 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:42:44.615840 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:42:44.615846 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:42:44.615852 | orchestrator |
2026-02-19 03:42:44.615859 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-19 03:42:44.615866 | orchestrator |
2026-02-19 03:42:44.615872 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 03:42:44.615878 | orchestrator | Thursday 19 February 2026 03:42:43 +0000 (0:00:00.549) 0:06:10.762 *****
2026-02-19 03:42:44.615885 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:42:44.615893 | orchestrator |
2026-02-19 03:42:44.615899 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-19 03:42:44.615913 | orchestrator | Thursday 19 February 2026 03:42:44 +0000 (0:00:00.764) 0:06:11.526 *****
2026-02-19 03:43:02.319018 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:43:02.319100 | orchestrator |
2026-02-19 03:43:02.319110 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-19 03:43:02.319117 | orchestrator | Thursday 19 February 2026 03:42:45 +0000 (0:00:00.346) 0:06:12.263 *****
2026-02-19 03:43:02.319123 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:43:02.319130 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:43:02.319136 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:43:02.319142 | orchestrator |
2026-02-19 03:43:02.319148 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-19 03:43:02.319154 | orchestrator | Thursday 19 February 2026 03:42:45 +0000 (0:00:00.346) 0:06:12.610 *****
2026-02-19 03:43:02.319160 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.319167 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.319172 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.319178 | orchestrator |
2026-02-19 03:43:02.319184 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-19 03:43:02.319190 | orchestrator | Thursday 19 February 2026 03:42:46 +0000 (0:00:00.748) 0:06:13.358 *****
2026-02-19 03:43:02.319245 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.319252 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.319257 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.319263 | orchestrator |
2026-02-19 03:43:02.319269 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-19 03:43:02.319275 | orchestrator | Thursday 19 February 2026 03:42:47 +0000 (0:00:00.750) 0:06:14.108 *****
2026-02-19 03:43:02.319280 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.319286 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.319291 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.319297 | orchestrator |
2026-02-19 03:43:02.319303 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-19 03:43:02.319309 | orchestrator | Thursday 19 February 2026 03:42:48 +0000 (0:00:00.996) 0:06:15.105 *****
2026-02-19 03:43:02.319314 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:43:02.319320 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:43:02.319332 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:43:02.319338 | orchestrator |
2026-02-19 03:43:02.319344 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-19 03:43:02.319371 | orchestrator | Thursday 19 February 2026 03:42:48 +0000 (0:00:00.320) 0:06:15.425 *****
2026-02-19 03:43:02.319378 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:43:02.319384 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:43:02.319389 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:43:02.319395 | orchestrator |
2026-02-19 03:43:02.319400 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-19 03:43:02.319406 | orchestrator | Thursday 19 February 2026 03:42:48 +0000 (0:00:00.300) 0:06:15.726 *****
2026-02-19 03:43:02.319412 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:43:02.319418 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:43:02.319424 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:43:02.319429 | orchestrator |
2026-02-19 03:43:02.319435 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-19 03:43:02.319443 | orchestrator | Thursday 19 February 2026 03:42:49 +0000 (0:00:00.299) 0:06:16.025 *****
2026-02-19 03:43:02.319453 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.319463 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.319478 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.319488 | orchestrator |
2026-02-19 03:43:02.319498 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-19 03:43:02.319508 | orchestrator | Thursday 19 February 2026 03:42:50 +0000 (0:00:00.995) 0:06:17.020 *****
2026-02-19 03:43:02.319518 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.319528 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.319537 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.319545 | orchestrator |
2026-02-19 03:43:02.319554 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-19 03:43:02.319562 | orchestrator | Thursday 19 February 2026 03:42:50 +0000 (0:00:00.776) 0:06:17.797 *****
2026-02-19 03:43:02.319572 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:43:02.319582 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:43:02.319592 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:43:02.319602 | orchestrator |
2026-02-19 03:43:02.319612 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 03:43:02.319622 | orchestrator | Thursday 19 February 2026 03:42:51 +0000 (0:00:00.362) 0:06:18.159 *****
2026-02-19 03:43:02.319631 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:43:02.319642 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:43:02.319651 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:43:02.319661 | orchestrator |
2026-02-19 03:43:02.319682 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 03:43:02.319689 | orchestrator | Thursday 19 February 2026 03:42:51 +0000 (0:00:00.300) 0:06:18.460 *****
2026-02-19 03:43:02.319694 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.319700 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.319712 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.319718 | orchestrator |
2026-02-19 03:43:02.319724 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 03:43:02.319729 | orchestrator | Thursday 19 February 2026 03:42:52 +0000 (0:00:00.593) 0:06:19.053 *****
2026-02-19 03:43:02.319735 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.319741 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.319747 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.319753 | orchestrator |
2026-02-19 03:43:02.319758 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 03:43:02.319764 | orchestrator | Thursday 19 February 2026 03:42:52 +0000 (0:00:00.339) 0:06:19.393 *****
2026-02-19 03:43:02.319770 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.319775 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.319781 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.319787 | orchestrator |
2026-02-19 03:43:02.319793 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 03:43:02.319798 | orchestrator | Thursday 19 February 2026 03:42:52 +0000 (0:00:00.349) 0:06:19.742 *****
2026-02-19 03:43:02.319804 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:43:02.319822 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:43:02.319828 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:43:02.319834 | orchestrator |
2026-02-19 03:43:02.319840 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 03:43:02.319846 | orchestrator | Thursday 19 February 2026 03:42:53 +0000 (0:00:00.322) 0:06:20.065 *****
2026-02-19 03:43:02.319852 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:43:02.319857 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:43:02.319863 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:43:02.319869 | orchestrator |
2026-02-19 03:43:02.319874 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 03:43:02.319880 | orchestrator | Thursday 19 February 2026 03:42:53 +0000 (0:00:00.602) 0:06:20.668 *****
2026-02-19 03:43:02.319886 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:43:02.319891 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:43:02.319897 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:43:02.319902 | orchestrator |
2026-02-19 03:43:02.319908 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 03:43:02.319914 | orchestrator | Thursday 19 February 2026 03:42:54 +0000 (0:00:00.322) 0:06:20.990 *****
2026-02-19 03:43:02.319920 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.319925 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.319931 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.319937 | orchestrator |
2026-02-19 03:43:02.319942 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 03:43:02.319948 | orchestrator | Thursday 19 February 2026 03:42:54 +0000 (0:00:00.342) 0:06:21.333 *****
2026-02-19 03:43:02.319954 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.319959 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.319965 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.319971 | orchestrator |
2026-02-19 03:43:02.319976 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-19 03:43:02.319982 | orchestrator | Thursday 19 February 2026 03:42:55 +0000 (0:00:00.810) 0:06:22.144 *****
2026-02-19 03:43:02.319988 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.319993 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.319999 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.320004 | orchestrator |
2026-02-19 03:43:02.320010 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-19 03:43:02.320016 | orchestrator | Thursday 19 February 2026 03:42:55 +0000 (0:00:00.348) 0:06:22.492 *****
2026-02-19 03:43:02.320022 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 03:43:02.320028 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 03:43:02.320039 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 03:43:02.320045 | orchestrator |
2026-02-19 03:43:02.320051 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-19 03:43:02.320057 | orchestrator | Thursday 19 February 2026 03:42:56 +0000 (0:00:00.682) 0:06:23.175 *****
2026-02-19 03:43:02.320063 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:43:02.320069 | orchestrator |
2026-02-19 03:43:02.320074 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-19 03:43:02.320080 | orchestrator | Thursday 19 February 2026 03:42:56 +0000 (0:00:00.550) 0:06:23.725 *****
2026-02-19 03:43:02.320086 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:43:02.320092 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:43:02.320097 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:43:02.320103 | orchestrator |
2026-02-19 03:43:02.320109 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-19 03:43:02.320114 | orchestrator | Thursday 19 February 2026 03:42:57 +0000 (0:00:00.604) 0:06:24.329 *****
2026-02-19 03:43:02.320120 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:43:02.320126 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:43:02.320131 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:43:02.320137 | orchestrator |
2026-02-19 03:43:02.320142 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-19 03:43:02.320148 | orchestrator | Thursday 19 February 2026 03:42:57 +0000 (0:00:00.324) 0:06:24.653 *****
2026-02-19 03:43:02.320154 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.320159 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.320165 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.320171 | orchestrator |
2026-02-19 03:43:02.320176 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-19 03:43:02.320182 | orchestrator | Thursday 19 February 2026 03:42:58 +0000 (0:00:00.628) 0:06:25.282 *****
2026-02-19 03:43:02.320188 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:43:02.320197 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:43:02.320203 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:43:02.320208 | orchestrator |
2026-02-19 03:43:02.320214 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-19 03:43:02.320220 | orchestrator | Thursday 19 February 2026 03:42:59 +0000 (0:00:00.657) 0:06:25.939 *****
2026-02-19 03:43:02.320226 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-19 03:43:02.320232 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-19 03:43:02.320238 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-19 03:43:02.320243 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-19 03:43:02.320249 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-19 03:43:02.320255 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-19 03:43:02.320260 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-19 03:43:02.320270 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-19 03:44:17.277942 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-19 03:44:17.278068 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-19 03:44:17.278078 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-19 03:44:17.278084 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-19 03:44:17.278090 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-19 03:44:17.278113 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-19 03:44:17.278119 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-19 03:44:17.278125 | orchestrator |
2026-02-19 03:44:17.278131 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-19 03:44:17.278137 | orchestrator | Thursday 19 February 2026 03:43:02 +0000 (0:00:03.284) 0:06:29.224 *****
2026-02-19 03:44:17.278143 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:17.278149 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:44:17.278154 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:44:17.278159 | orchestrator |
2026-02-19 03:44:17.278165 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-19 03:44:17.278170 | orchestrator | Thursday 19 February 2026 03:43:02 +0000 (0:00:00.325) 0:06:29.549 *****
2026-02-19 03:44:17.278176 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:44:17.278182 | orchestrator |
2026-02-19 03:44:17.278188 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-19 03:44:17.278193 | orchestrator | Thursday 19 February 2026 03:43:03 +0000 (0:00:00.801) 0:06:30.351 *****
2026-02-19 03:44:17.278199 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-19 03:44:17.278205 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-19 03:44:17.278210 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-19 03:44:17.278216 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-19 03:44:17.278222 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-02-19 03:44:17.278227 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-19 03:44:17.278232 | orchestrator |
2026-02-19 03:44:17.278238 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-19 03:44:17.278243 | orchestrator | Thursday 19 February 2026 03:43:04 +0000 (0:00:01.131) 0:06:31.482 *****
2026-02-19 03:44:17.278248 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 03:44:17.278254 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-19 03:44:17.278259 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-19 03:44:17.278265 | orchestrator |
2026-02-19 03:44:17.278270 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-19 03:44:17.278275 | orchestrator | Thursday 19 February 2026 03:43:06 +0000 (0:00:02.228) 0:06:33.711 *****
2026-02-19 03:44:17.278281 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-19 03:44:17.278286 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-19 03:44:17.278293 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:44:17.278302 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-19 03:44:17.278313 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-19 03:44:17.278325 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:44:17.278335 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-19 03:44:17.278343 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-19 03:44:17.278352 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:44:17.278407 | orchestrator |
2026-02-19 03:44:17.278416 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-19 03:44:17.278425 | orchestrator | Thursday 19 February 2026 03:43:07 +0000 (0:00:01.209) 0:06:34.920 *****
2026-02-19 03:44:17.278434 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-19 03:44:17.278442 | orchestrator |
2026-02-19 03:44:17.278450 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-19 03:44:17.278475 | orchestrator | Thursday 19 February 2026 03:43:10 +0000 (0:00:02.265) 0:06:37.186 *****
2026-02-19 03:44:17.278485 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:44:17.278501 | orchestrator |
2026-02-19 03:44:17.278508 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-02-19 03:44:17.278514 | orchestrator | Thursday 19 February 2026 03:43:11 +0000 (0:00:00.796) 0:06:37.982 *****
2026-02-19 03:44:17.278522 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 03:44:17.278530 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})
2026-02-19 03:44:17.278537 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 03:44:17.278543 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 03:44:17.278563 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 03:44:17.278570 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})
2026-02-19 03:44:17.278576 | orchestrator |
2026-02-19 03:44:17.278583 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-19 03:44:17.278589 | orchestrator | Thursday 19 February 2026 03:43:58 +0000 (0:00:47.795) 0:07:25.778 *****
2026-02-19 03:44:17.278595 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:17.278600 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:44:17.278607 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:44:17.278613 | orchestrator |
2026-02-19 03:44:17.278620 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-19 03:44:17.278629 | orchestrator | Thursday 19 February 2026 03:43:59 +0000 (0:00:00.317) 0:07:26.095 *****
2026-02-19 03:44:17.278637 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:44:17.278647 | orchestrator |
2026-02-19 03:44:17.278657 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-19 03:44:17.278666 | orchestrator | Thursday 19 February 2026 03:43:59 +0000 (0:00:00.793) 0:07:26.889 *****
2026-02-19 03:44:17.278674 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:44:17.278685 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:44:17.278691 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:44:17.278697 | orchestrator |
2026-02-19 03:44:17.278703 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-19 03:44:17.278710 | orchestrator | Thursday 19 February 2026 03:44:00 +0000 (0:00:00.686) 0:07:27.576 *****
2026-02-19 03:44:17.278716 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:44:17.278722 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:44:17.278728 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:44:17.278734 | orchestrator |
2026-02-19 03:44:17.278740 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-19 03:44:17.278746 | orchestrator | Thursday 19 February 2026 03:44:03 +0000 (0:00:02.724) 0:07:30.300 *****
2026-02-19 03:44:17.278752 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:44:17.278758 | orchestrator |
2026-02-19 03:44:17.278764 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-19 03:44:17.278770 | orchestrator | Thursday 19 February 2026 03:44:04 +0000 (0:00:00.857) 0:07:31.158 *****
2026-02-19 03:44:17.278776 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:44:17.278782 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:44:17.278789 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:44:17.278794 | orchestrator |
2026-02-19 03:44:17.278806 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-19 03:44:17.278812 | orchestrator | Thursday 19 February 2026 03:44:05 +0000 (0:00:01.247) 0:07:32.405 *****
2026-02-19 03:44:17.278818 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:44:17.278825 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:44:17.278831 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:44:17.278837 | orchestrator |
2026-02-19 03:44:17.278843 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-19 03:44:17.278849 | orchestrator | Thursday 19 February 2026 03:44:06 +0000 (0:00:01.222) 0:07:33.627 *****
2026-02-19 03:44:17.278855 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:44:17.278860 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:44:17.278865 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:44:17.278871 | orchestrator |
2026-02-19 03:44:17.278876 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-19 03:44:17.278881 | orchestrator | Thursday 19 February 2026 03:44:08 +0000 (0:00:02.226) 0:07:35.854 *****
2026-02-19 03:44:17.278887 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:17.278892 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:44:17.278897 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:44:17.278903 | orchestrator |
2026-02-19 03:44:17.278908 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-19 03:44:17.278913 | orchestrator | Thursday 19 February 2026 03:44:09 +0000 (0:00:00.352) 0:07:36.206 *****
2026-02-19 03:44:17.278918 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:17.278924 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:44:17.278929 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:44:17.278934 | orchestrator |
2026-02-19 03:44:17.278939 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-19 03:44:17.278949 | orchestrator | Thursday 19 February 2026 03:44:09 +0000 (0:00:00.359) 0:07:36.566 *****
2026-02-19 03:44:17.278954 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-02-19 03:44:17.278959 | orchestrator | ok: [testbed-node-4] => (item=2)
2026-02-19 03:44:17.278965 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-02-19 03:44:17.278970 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-19 03:44:17.278975 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-02-19 03:44:17.278980 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-02-19 03:44:17.278986 | orchestrator |
2026-02-19 03:44:17.278991 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-19 03:44:17.278996 | orchestrator | Thursday 19 February 2026 03:44:10 +0000 (0:00:01.217) 0:07:37.784 *****
2026-02-19 03:44:17.279002 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-02-19 03:44:17.279007 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-02-19 03:44:17.279012 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-19 03:44:17.279017 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-19 03:44:17.279023 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-02-19 03:44:17.279028 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-02-19 03:44:17.279033 | orchestrator |
2026-02-19 03:44:17.279039 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-19 03:44:17.279044 | orchestrator | Thursday 19 February 2026 03:44:13 +0000 (0:00:02.542) 0:07:40.326 *****
2026-02-19 03:44:17.279049 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-02-19 03:44:17.279062 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-19 03:44:47.921015 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-02-19 03:44:47.921114 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-19 03:44:47.921126 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-02-19 03:44:47.921134 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-02-19 03:44:47.921143 | orchestrator |
2026-02-19 03:44:47.921152 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-19 03:44:47.921162 | orchestrator | Thursday 19 February 2026 03:44:17 +0000 (0:00:03.861) 0:07:44.188 *****
2026-02-19 03:44:47.921192 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921201 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:44:47.921209 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-19 03:44:47.921217 | orchestrator |
2026-02-19 03:44:47.921225 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-19 03:44:47.921233 | orchestrator | Thursday 19 February 2026 03:44:19 +0000 (0:00:02.187) 0:07:46.376 *****
2026-02-19 03:44:47.921240 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921248 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:44:47.921256 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-19 03:44:47.921265 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-19 03:44:47.921273 | orchestrator |
2026-02-19 03:44:47.921281 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-19 03:44:47.921289 | orchestrator | Thursday 19 February 2026 03:44:31 +0000 (0:00:12.512) 0:07:58.889 *****
2026-02-19 03:44:47.921297 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921305 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:44:47.921312 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:44:47.921321 | orchestrator |
2026-02-19 03:44:47.921329 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-19 03:44:47.921336 | orchestrator | Thursday 19 February 2026 03:44:33 +0000 (0:00:01.193) 0:08:00.082 *****
2026-02-19 03:44:47.921344 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921352 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:44:47.921403 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:44:47.921414 | orchestrator |
2026-02-19 03:44:47.921439 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-19 03:44:47.921456 | orchestrator | Thursday 19 February 2026 03:44:33 +0000 (0:00:00.346) 0:08:00.429 *****
2026-02-19 03:44:47.921465 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:44:47.921474 | orchestrator |
2026-02-19 03:44:47.921481 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-19 03:44:47.921489 | orchestrator | Thursday 19 February 2026 03:44:34 +0000 (0:00:00.902) 0:08:01.331 *****
2026-02-19 03:44:47.921497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:44:47.921505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:44:47.921513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:44:47.921521 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921529 | orchestrator |
2026-02-19 03:44:47.921536 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-19 03:44:47.921545 | orchestrator | Thursday 19 February 2026 03:44:34 +0000 (0:00:00.417) 0:08:01.749 *****
2026-02-19 03:44:47.921554 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921564 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:44:47.921573 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:44:47.921582 | orchestrator |
2026-02-19 03:44:47.921591 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-19 03:44:47.921600 | orchestrator | Thursday 19 February 2026 03:44:35 +0000 (0:00:00.360) 0:08:02.109 *****
2026-02-19 03:44:47.921609 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921618 | orchestrator |
2026-02-19 03:44:47.921627 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-19 03:44:47.921637 | orchestrator | Thursday 19 February 2026 03:44:35 +0000 (0:00:00.249) 0:08:02.359 *****
2026-02-19 03:44:47.921652 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921676 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:44:47.921691 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:44:47.921705 | orchestrator |
2026-02-19 03:44:47.921720 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-19 03:44:47.921760 | orchestrator | Thursday 19 February 2026 03:44:35 +0000 (0:00:00.562) 0:08:02.921 *****
2026-02-19 03:44:47.921776 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921791 | orchestrator |
2026-02-19 03:44:47.921807 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-19 03:44:47.921822 | orchestrator | Thursday 19 February 2026 03:44:36 +0000 (0:00:00.236) 0:08:03.157 *****
2026-02-19 03:44:47.921832 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921841 | orchestrator |
2026-02-19 03:44:47.921851 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-19 03:44:47.921859 | orchestrator | Thursday 19 February 2026 03:44:36 +0000 (0:00:00.223) 0:08:03.380 *****
2026-02-19 03:44:47.921867 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921874 | orchestrator |
2026-02-19 03:44:47.921882 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-19 03:44:47.921890 | orchestrator | Thursday 19 February 2026 03:44:36 +0000 (0:00:00.141) 0:08:03.522 *****
2026-02-19 03:44:47.921898 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921905 | orchestrator |
2026-02-19 03:44:47.921913 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-19 03:44:47.921921 | orchestrator | Thursday 19 February 2026 03:44:36 +0000 (0:00:00.248) 0:08:03.770 *****
2026-02-19 03:44:47.921929 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.921937 | orchestrator |
2026-02-19 03:44:47.921945 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-19 03:44:47.921953 | orchestrator | Thursday 19 February 2026 03:44:37 +0000 (0:00:00.261) 0:08:04.031 *****
2026-02-19 03:44:47.921977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:44:47.921986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:44:47.921993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:44:47.922001 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.922009 | orchestrator |
2026-02-19 03:44:47.922069 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-19 03:44:47.922077 | orchestrator | Thursday 19 February 2026 03:44:37 +0000 (0:00:00.445) 0:08:04.477 *****
2026-02-19 03:44:47.922085 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.922093 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:44:47.922101 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:44:47.922108 | orchestrator |
2026-02-19 03:44:47.922116 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-19 03:44:47.922123 | orchestrator | Thursday 19 February 2026 03:44:37 +0000 (0:00:00.325) 0:08:04.802 *****
2026-02-19 03:44:47.922131 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.922139 | orchestrator |
2026-02-19 03:44:47.922146 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-19 03:44:47.922154 | orchestrator | Thursday 19 February 2026 03:44:38 +0000 (0:00:00.233) 0:08:05.036 *****
2026-02-19 03:44:47.922162 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.922169 | orchestrator |
2026-02-19 03:44:47.922177 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-19 03:44:47.922185 | orchestrator |
2026-02-19 03:44:47.922193 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 03:44:47.922201 | orchestrator | Thursday 19 February 2026 03:44:39 +0000 (0:00:01.246) 0:08:06.282 *****
2026-02-19 03:44:47.922210 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:44:47.922219 | orchestrator |
2026-02-19 03:44:47.922227 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-19 03:44:47.922235 | orchestrator | Thursday 19 February 2026 03:44:40 +0000 (0:00:01.188) 0:08:07.470 *****
2026-02-19 03:44:47.922243 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 03:44:47.922258 | orchestrator |
2026-02-19 03:44:47.922266 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-19 03:44:47.922274 | orchestrator | Thursday 19 February 2026 03:44:41 +0000 (0:00:01.258) 0:08:08.729 *****
2026-02-19 03:44:47.922282 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:44:47.922290 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:44:47.922297 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:44:47.922305 | orchestrator | ok: [testbed-node-0]
2026-02-19 03:44:47.922313 | orchestrator | ok: [testbed-node-1]
2026-02-19 03:44:47.922321 | orchestrator | ok: [testbed-node-2]
2026-02-19 03:44:47.922329 | orchestrator |
2026-02-19 03:44:47.922336 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-19 03:44:47.922345 | orchestrator | Thursday 19 February 2026 03:44:43 +0000 (0:00:01.301) 0:08:10.031 *****
2026-02-19 03:44:47.922357 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:44:47.922442 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:44:47.922455 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:44:47.922467 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:44:47.922479 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:44:47.922492 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:44:47.922503 | orchestrator |
2026-02-19 03:44:47.922514 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-19 03:44:47.922526 | orchestrator | Thursday 19
February 2026 03:44:43 +0000 (0:00:00.787) 0:08:10.818 ***** 2026-02-19 03:44:47.922538 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:44:47.922551 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:44:47.922563 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:44:47.922575 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:44:47.922587 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:44:47.922600 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:44:47.922613 | orchestrator | 2026-02-19 03:44:47.922626 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-19 03:44:47.922639 | orchestrator | Thursday 19 February 2026 03:44:44 +0000 (0:00:01.066) 0:08:11.884 ***** 2026-02-19 03:44:47.922652 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:44:47.922666 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:44:47.922679 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:44:47.922701 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:44:47.922715 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:44:47.922728 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:44:47.922740 | orchestrator | 2026-02-19 03:44:47.922752 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-19 03:44:47.922765 | orchestrator | Thursday 19 February 2026 03:44:45 +0000 (0:00:00.785) 0:08:12.670 ***** 2026-02-19 03:44:47.922778 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:44:47.922790 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:44:47.922802 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:44:47.922816 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:44:47.922858 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:44:47.922871 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:44:47.922884 | orchestrator | 2026-02-19 03:44:47.922897 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-02-19 03:44:47.922909 | orchestrator | Thursday 19 February 2026 03:44:47 +0000 (0:00:01.304) 0:08:13.974 ***** 2026-02-19 03:44:47.922922 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:44:47.922935 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:44:47.922949 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:44:47.922957 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:44:47.922965 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:44:47.922973 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:44:47.922980 | orchestrator | 2026-02-19 03:44:47.922988 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-19 03:44:47.923045 | orchestrator | Thursday 19 February 2026 03:44:47 +0000 (0:00:00.652) 0:08:14.626 ***** 2026-02-19 03:44:47.923065 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:45:21.644188 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:45:21.644269 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:45:21.644275 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:45:21.644280 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:45:21.644284 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:45:21.644288 | orchestrator | 2026-02-19 03:45:21.644293 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-19 03:45:21.644299 | orchestrator | Thursday 19 February 2026 03:44:48 +0000 (0:00:00.934) 0:08:15.561 ***** 2026-02-19 03:45:21.644303 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:21.644308 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:21.644312 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:21.644315 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:45:21.644319 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:45:21.644332 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:45:21.644337 | orchestrator 
| 2026-02-19 03:45:21.644341 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-19 03:45:21.644351 | orchestrator | Thursday 19 February 2026 03:44:49 +0000 (0:00:01.120) 0:08:16.681 ***** 2026-02-19 03:45:21.644355 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:21.644359 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:21.644379 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:21.644386 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:45:21.644390 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:45:21.644394 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:45:21.644398 | orchestrator | 2026-02-19 03:45:21.644402 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-19 03:45:21.644406 | orchestrator | Thursday 19 February 2026 03:44:52 +0000 (0:00:02.301) 0:08:18.983 ***** 2026-02-19 03:45:21.644410 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:45:21.644414 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:45:21.644418 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:45:21.644422 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:45:21.644426 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:45:21.644430 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:45:21.644434 | orchestrator | 2026-02-19 03:45:21.644437 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-19 03:45:21.644441 | orchestrator | Thursday 19 February 2026 03:44:52 +0000 (0:00:00.646) 0:08:19.629 ***** 2026-02-19 03:45:21.644445 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:45:21.644449 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:45:21.644453 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:45:21.644456 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:45:21.644460 | orchestrator | ok: [testbed-node-1] 2026-02-19 
03:45:21.644464 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:45:21.644467 | orchestrator | 2026-02-19 03:45:21.644471 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-19 03:45:21.644475 | orchestrator | Thursday 19 February 2026 03:44:53 +0000 (0:00:01.031) 0:08:20.661 ***** 2026-02-19 03:45:21.644479 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:21.644483 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:21.644486 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:21.644490 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:45:21.644494 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:45:21.644497 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:45:21.644501 | orchestrator | 2026-02-19 03:45:21.644505 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-19 03:45:21.644509 | orchestrator | Thursday 19 February 2026 03:44:54 +0000 (0:00:00.699) 0:08:21.360 ***** 2026-02-19 03:45:21.644513 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:21.644516 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:21.644520 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:21.644540 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:45:21.644544 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:45:21.644548 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:45:21.644552 | orchestrator | 2026-02-19 03:45:21.644555 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-19 03:45:21.644559 | orchestrator | Thursday 19 February 2026 03:44:55 +0000 (0:00:01.044) 0:08:22.405 ***** 2026-02-19 03:45:21.644563 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:21.644566 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:21.644570 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:21.644574 | orchestrator | skipping: [testbed-node-0] 
2026-02-19 03:45:21.644578 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:45:21.644581 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:45:21.644585 | orchestrator | 2026-02-19 03:45:21.644589 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-19 03:45:21.644592 | orchestrator | Thursday 19 February 2026 03:44:56 +0000 (0:00:00.643) 0:08:23.049 ***** 2026-02-19 03:45:21.644596 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:45:21.644600 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:45:21.644604 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:45:21.644608 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:45:21.644611 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:45:21.644615 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:45:21.644619 | orchestrator | 2026-02-19 03:45:21.644623 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-19 03:45:21.644627 | orchestrator | Thursday 19 February 2026 03:44:57 +0000 (0:00:00.964) 0:08:24.013 ***** 2026-02-19 03:45:21.644631 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:45:21.644635 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:45:21.644638 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:45:21.644642 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:45:21.644646 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:45:21.644649 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:45:21.644653 | orchestrator | 2026-02-19 03:45:21.644657 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-19 03:45:21.644661 | orchestrator | Thursday 19 February 2026 03:44:57 +0000 (0:00:00.667) 0:08:24.681 ***** 2026-02-19 03:45:21.644664 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:45:21.644668 | orchestrator | skipping: [testbed-node-4] 
2026-02-19 03:45:21.644672 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:45:21.644676 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:45:21.644679 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:45:21.644683 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:45:21.644687 | orchestrator | 2026-02-19 03:45:21.644690 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-19 03:45:21.644704 | orchestrator | Thursday 19 February 2026 03:44:58 +0000 (0:00:00.972) 0:08:25.653 ***** 2026-02-19 03:45:21.644708 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:21.644712 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:21.644716 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:21.644719 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:45:21.644723 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:45:21.644727 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:45:21.644730 | orchestrator | 2026-02-19 03:45:21.644734 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-19 03:45:21.644767 | orchestrator | Thursday 19 February 2026 03:44:59 +0000 (0:00:00.638) 0:08:26.292 ***** 2026-02-19 03:45:21.644772 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:21.644776 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:21.644780 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:21.644785 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:45:21.644789 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:45:21.644793 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:45:21.644797 | orchestrator | 2026-02-19 03:45:21.644801 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-19 03:45:21.644810 | orchestrator | Thursday 19 February 2026 03:45:00 +0000 (0:00:01.492) 0:08:27.784 ***** 2026-02-19 03:45:21.644815 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-02-19 03:45:21.644819 | orchestrator | 2026-02-19 03:45:21.644824 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-19 03:45:21.644828 | orchestrator | Thursday 19 February 2026 03:45:04 +0000 (0:00:04.016) 0:08:31.801 ***** 2026-02-19 03:45:21.644833 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 03:45:21.644837 | orchestrator | 2026-02-19 03:45:21.644842 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-19 03:45:21.644846 | orchestrator | Thursday 19 February 2026 03:45:07 +0000 (0:00:02.325) 0:08:34.126 ***** 2026-02-19 03:45:21.644851 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:45:21.644854 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:45:21.644858 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:45:21.644862 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:45:21.644866 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:45:21.644872 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:45:21.644878 | orchestrator | 2026-02-19 03:45:21.644886 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-19 03:45:21.644896 | orchestrator | Thursday 19 February 2026 03:45:08 +0000 (0:00:01.791) 0:08:35.918 ***** 2026-02-19 03:45:21.644901 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:45:21.644907 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:45:21.644913 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:45:21.644918 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:45:21.644924 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:45:21.644930 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:45:21.644935 | orchestrator | 2026-02-19 03:45:21.644941 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-02-19 03:45:21.644947 | orchestrator | Thursday 19 February 2026 03:45:10 +0000 (0:00:01.351) 0:08:37.269 ***** 2026-02-19 03:45:21.644955 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:45:21.644963 | orchestrator | 2026-02-19 03:45:21.644970 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-19 03:45:21.644976 | orchestrator | Thursday 19 February 2026 03:45:11 +0000 (0:00:01.401) 0:08:38.671 ***** 2026-02-19 03:45:21.644982 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:45:21.644988 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:45:21.644994 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:45:21.645000 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:45:21.645007 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:45:21.645013 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:45:21.645017 | orchestrator | 2026-02-19 03:45:21.645021 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-19 03:45:21.645025 | orchestrator | Thursday 19 February 2026 03:45:13 +0000 (0:00:01.630) 0:08:40.301 ***** 2026-02-19 03:45:21.645029 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:45:21.645033 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:45:21.645037 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:45:21.645041 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:45:21.645044 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:45:21.645048 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:45:21.645052 | orchestrator | 2026-02-19 03:45:21.645056 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-02-19 03:45:21.645064 | orchestrator | Thursday 19 February 2026 03:45:17 +0000 (0:00:03.714) 
0:08:44.016 ***** 2026-02-19 03:45:21.645068 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:45:21.645072 | orchestrator | 2026-02-19 03:45:21.645080 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-02-19 03:45:21.645084 | orchestrator | Thursday 19 February 2026 03:45:18 +0000 (0:00:01.292) 0:08:45.308 ***** 2026-02-19 03:45:21.645087 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:21.645091 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:21.645095 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:21.645098 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:45:21.645102 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:45:21.645106 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:45:21.645109 | orchestrator | 2026-02-19 03:45:21.645113 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-02-19 03:45:21.645117 | orchestrator | Thursday 19 February 2026 03:45:19 +0000 (0:00:00.670) 0:08:45.979 ***** 2026-02-19 03:45:21.645121 | orchestrator | changed: [testbed-node-3] 2026-02-19 03:45:21.645124 | orchestrator | changed: [testbed-node-5] 2026-02-19 03:45:21.645128 | orchestrator | changed: [testbed-node-4] 2026-02-19 03:45:21.645132 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:45:21.645135 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:45:21.645139 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:45:21.645143 | orchestrator | 2026-02-19 03:45:21.645146 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-02-19 03:45:21.645156 | orchestrator | Thursday 19 February 2026 03:45:21 +0000 (0:00:02.571) 0:08:48.550 ***** 2026-02-19 03:45:50.142628 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:50.142728 | orchestrator | 
ok: [testbed-node-4] 2026-02-19 03:45:50.142739 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:50.142748 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:45:50.142757 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:45:50.142765 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:45:50.142773 | orchestrator | 2026-02-19 03:45:50.142783 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-02-19 03:45:50.142792 | orchestrator | 2026-02-19 03:45:50.142801 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-19 03:45:50.142809 | orchestrator | Thursday 19 February 2026 03:45:22 +0000 (0:00:00.925) 0:08:49.476 ***** 2026-02-19 03:45:50.142818 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 03:45:50.142827 | orchestrator | 2026-02-19 03:45:50.142835 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-19 03:45:50.142843 | orchestrator | Thursday 19 February 2026 03:45:23 +0000 (0:00:00.763) 0:08:50.240 ***** 2026-02-19 03:45:50.142851 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 03:45:50.142859 | orchestrator | 2026-02-19 03:45:50.142867 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-19 03:45:50.142875 | orchestrator | Thursday 19 February 2026 03:45:23 +0000 (0:00:00.501) 0:08:50.742 ***** 2026-02-19 03:45:50.142883 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:45:50.142891 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:45:50.142899 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:45:50.142907 | orchestrator | 2026-02-19 03:45:50.142915 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-02-19 03:45:50.142923 | orchestrator | Thursday 19 February 2026 03:45:24 +0000 (0:00:00.540) 0:08:51.282 ***** 2026-02-19 03:45:50.142931 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:50.142939 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:50.142946 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:50.142954 | orchestrator | 2026-02-19 03:45:50.142962 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-19 03:45:50.142970 | orchestrator | Thursday 19 February 2026 03:45:25 +0000 (0:00:00.831) 0:08:52.113 ***** 2026-02-19 03:45:50.142978 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:50.142986 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:50.143018 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:50.143026 | orchestrator | 2026-02-19 03:45:50.143034 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-19 03:45:50.143042 | orchestrator | Thursday 19 February 2026 03:45:25 +0000 (0:00:00.723) 0:08:52.837 ***** 2026-02-19 03:45:50.143050 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:50.143058 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:50.143066 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:50.143073 | orchestrator | 2026-02-19 03:45:50.143081 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-19 03:45:50.143089 | orchestrator | Thursday 19 February 2026 03:45:26 +0000 (0:00:00.695) 0:08:53.533 ***** 2026-02-19 03:45:50.143097 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:45:50.143105 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:45:50.143113 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:45:50.143120 | orchestrator | 2026-02-19 03:45:50.143128 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-19 
03:45:50.143136 | orchestrator | Thursday 19 February 2026 03:45:27 +0000 (0:00:00.619) 0:08:54.152 ***** 2026-02-19 03:45:50.143145 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:45:50.143159 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:45:50.143171 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:45:50.143183 | orchestrator | 2026-02-19 03:45:50.143196 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-19 03:45:50.143211 | orchestrator | Thursday 19 February 2026 03:45:27 +0000 (0:00:00.352) 0:08:54.504 ***** 2026-02-19 03:45:50.143226 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:45:50.143240 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:45:50.143254 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:45:50.143264 | orchestrator | 2026-02-19 03:45:50.143272 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-19 03:45:50.143281 | orchestrator | Thursday 19 February 2026 03:45:27 +0000 (0:00:00.352) 0:08:54.857 ***** 2026-02-19 03:45:50.143288 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:50.143296 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:50.143304 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:50.143312 | orchestrator | 2026-02-19 03:45:50.143333 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-19 03:45:50.143341 | orchestrator | Thursday 19 February 2026 03:45:28 +0000 (0:00:00.990) 0:08:55.848 ***** 2026-02-19 03:45:50.143349 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:50.143357 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:50.143364 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:50.143398 | orchestrator | 2026-02-19 03:45:50.143407 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-19 03:45:50.143415 | orchestrator | 
Thursday 19 February 2026 03:45:29 +0000 (0:00:00.788) 0:08:56.636 ***** 2026-02-19 03:45:50.143423 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:45:50.143430 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:45:50.143438 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:45:50.143446 | orchestrator | 2026-02-19 03:45:50.143454 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-19 03:45:50.143462 | orchestrator | Thursday 19 February 2026 03:45:30 +0000 (0:00:00.331) 0:08:56.968 ***** 2026-02-19 03:45:50.143470 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:45:50.143477 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:45:50.143485 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:45:50.143493 | orchestrator | 2026-02-19 03:45:50.143500 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-19 03:45:50.143508 | orchestrator | Thursday 19 February 2026 03:45:30 +0000 (0:00:00.347) 0:08:57.315 ***** 2026-02-19 03:45:50.143516 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:50.143524 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:50.143531 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:50.143539 | orchestrator | 2026-02-19 03:45:50.143562 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-19 03:45:50.143579 | orchestrator | Thursday 19 February 2026 03:45:31 +0000 (0:00:00.759) 0:08:58.074 ***** 2026-02-19 03:45:50.143586 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:45:50.143594 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:45:50.143602 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:45:50.143610 | orchestrator | 2026-02-19 03:45:50.143618 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-19 03:45:50.143626 | orchestrator | Thursday 19 February 2026 03:45:31 
+0000 (0:00:00.375) 0:08:58.450 *****
2026-02-19 03:45:50.143633 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:45:50.143641 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:45:50.143649 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:45:50.143657 | orchestrator |
2026-02-19 03:45:50.143664 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 03:45:50.143672 | orchestrator | Thursday 19 February 2026 03:45:31 +0000 (0:00:00.370) 0:08:58.821 *****
2026-02-19 03:45:50.143680 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:45:50.143688 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:45:50.143696 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:45:50.143703 | orchestrator |
2026-02-19 03:45:50.143711 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 03:45:50.143719 | orchestrator | Thursday 19 February 2026 03:45:32 +0000 (0:00:00.336) 0:08:59.157 *****
2026-02-19 03:45:50.143726 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:45:50.143734 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:45:50.143742 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:45:50.143749 | orchestrator |
2026-02-19 03:45:50.143757 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 03:45:50.143765 | orchestrator | Thursday 19 February 2026 03:45:32 +0000 (0:00:00.603) 0:08:59.761 *****
2026-02-19 03:45:50.143773 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:45:50.143780 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:45:50.143788 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:45:50.143796 | orchestrator |
2026-02-19 03:45:50.143804 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 03:45:50.143811 | orchestrator | Thursday 19 February 2026 03:45:33 +0000 (0:00:00.333) 0:09:00.095 *****
2026-02-19 03:45:50.143819 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:45:50.143827 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:45:50.143835 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:45:50.143842 | orchestrator |
2026-02-19 03:45:50.143850 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 03:45:50.143858 | orchestrator | Thursday 19 February 2026 03:45:33 +0000 (0:00:00.365) 0:09:00.460 *****
2026-02-19 03:45:50.143866 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:45:50.143873 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:45:50.143881 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:45:50.143889 | orchestrator |
2026-02-19 03:45:50.143897 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-19 03:45:50.143905 | orchestrator | Thursday 19 February 2026 03:45:34 +0000 (0:00:00.861) 0:09:01.322 *****
2026-02-19 03:45:50.143913 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:45:50.143921 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:45:50.143929 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-19 03:45:50.143937 | orchestrator |
2026-02-19 03:45:50.143945 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-19 03:45:50.143953 | orchestrator | Thursday 19 February 2026 03:45:34 +0000 (0:00:00.447) 0:09:01.769 *****
2026-02-19 03:45:50.143960 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-19 03:45:50.143968 | orchestrator |
2026-02-19 03:45:50.143976 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-19 03:45:50.143984 | orchestrator | Thursday 19 February 2026 03:45:37 +0000 (0:00:02.290) 0:09:04.060 *****
2026-02-19 03:45:50.143999 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-02-19 03:45:50.144010 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:45:50.144018 | orchestrator |
2026-02-19 03:45:50.144025 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-19 03:45:50.144037 | orchestrator | Thursday 19 February 2026 03:45:37 +0000 (0:00:00.225) 0:09:04.286 *****
2026-02-19 03:45:50.144048 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-19 03:45:50.144064 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-19 03:45:50.144072 | orchestrator |
2026-02-19 03:45:50.144080 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-19 03:45:50.144087 | orchestrator | Thursday 19 February 2026 03:45:45 +0000 (0:00:08.593) 0:09:12.879 *****
2026-02-19 03:45:50.144095 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-19 03:45:50.144103 | orchestrator |
2026-02-19 03:45:50.144110 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-19 03:45:50.144118 | orchestrator | Thursday 19 February 2026 03:45:49 +0000 (0:00:03.516) 0:09:16.396 *****
2026-02-19 03:45:50.144126 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:45:50.144134 | orchestrator |
2026-02-19 03:45:50.144147 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-19 03:46:15.655833 | orchestrator | Thursday 19 February 2026 03:45:50 +0000 (0:00:00.659) 0:09:17.055 *****
2026-02-19 03:46:15.655960 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-19 03:46:15.655977 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-19 03:46:15.655989 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-19 03:46:15.656001 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-19 03:46:15.656012 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-19 03:46:15.656023 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-19 03:46:15.656034 | orchestrator |
2026-02-19 03:46:15.656046 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-19 03:46:15.656057 | orchestrator | Thursday 19 February 2026 03:45:51 +0000 (0:00:01.054) 0:09:18.109 *****
2026-02-19 03:46:15.656068 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 03:46:15.656079 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-19 03:46:15.656091 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-19 03:46:15.656101 | orchestrator |
2026-02-19 03:46:15.656112 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-19 03:46:15.656123 | orchestrator | Thursday 19 February 2026 03:45:53 +0000 (0:00:02.349) 0:09:20.459 *****
2026-02-19 03:46:15.656135 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-19 03:46:15.656147 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-19 03:46:15.656158 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:46:15.656169 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-19 03:46:15.656188 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-19 03:46:15.656207 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:46:15.656254 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-19 03:46:15.656266 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-19 03:46:15.656277 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:46:15.656288 | orchestrator |
2026-02-19 03:46:15.656298 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-02-19 03:46:15.656309 | orchestrator | Thursday 19 February 2026 03:45:54 +0000 (0:00:01.153) 0:09:21.613 *****
2026-02-19 03:46:15.656319 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:46:15.656331 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:46:15.656343 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:46:15.656356 | orchestrator |
2026-02-19 03:46:15.656368 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-02-19 03:46:15.656415 | orchestrator | Thursday 19 February 2026 03:45:57 +0000 (0:00:02.820) 0:09:24.434 *****
2026-02-19 03:46:15.656428 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:46:15.656440 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:46:15.656452 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:46:15.656465 | orchestrator |
2026-02-19 03:46:15.656477 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-02-19 03:46:15.656496 | orchestrator | Thursday 19 February 2026 03:45:57 +0000 (0:00:00.291) 0:09:24.725 *****
2026-02-19 03:46:15.656515 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:46:15.656535 | orchestrator |
2026-02-19 03:46:15.656556 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-02-19 03:46:15.656581 | orchestrator | Thursday 19 February 2026 03:45:58 +0000 (0:00:00.493) 0:09:25.219 *****
2026-02-19 03:46:15.656606 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:46:15.656625 | orchestrator |
2026-02-19 03:46:15.656643 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-02-19 03:46:15.656659 | orchestrator | Thursday 19 February 2026 03:45:58 +0000 (0:00:00.640) 0:09:25.860 *****
2026-02-19 03:46:15.656678 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:46:15.656696 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:46:15.656715 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:46:15.656733 | orchestrator |
2026-02-19 03:46:15.656789 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-02-19 03:46:15.656811 | orchestrator | Thursday 19 February 2026 03:46:00 +0000 (0:00:01.264) 0:09:27.125 *****
2026-02-19 03:46:15.656828 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:46:15.656845 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:46:15.656865 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:46:15.656885 | orchestrator |
2026-02-19 03:46:15.656901 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-02-19 03:46:15.656919 | orchestrator | Thursday 19 February 2026 03:46:01 +0000 (0:00:01.851) 0:09:28.449 *****
2026-02-19 03:46:15.656937 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:46:15.656956 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:46:15.656975 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:46:15.656994 | orchestrator |
2026-02-19 03:46:15.657013 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-02-19 03:46:15.657032 | orchestrator | Thursday 19 February 2026 03:46:03 +0000 (0:00:01.851) 0:09:30.301 *****
2026-02-19 03:46:15.657051 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:46:15.657072 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:46:15.657092 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:46:15.657110 | orchestrator |
2026-02-19 03:46:15.657128 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-02-19 03:46:15.657146 | orchestrator | Thursday 19 February 2026 03:46:05 +0000 (0:00:02.158) 0:09:32.459 *****
2026-02-19 03:46:15.657166 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:15.657185 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:15.657230 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:15.657241 | orchestrator |
2026-02-19 03:46:15.657252 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-19 03:46:15.657290 | orchestrator | Thursday 19 February 2026 03:46:06 +0000 (0:00:01.353) 0:09:33.812 *****
2026-02-19 03:46:15.657307 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:46:15.657322 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:46:15.657339 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:46:15.657354 | orchestrator |
2026-02-19 03:46:15.657394 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-19 03:46:15.657414 | orchestrator | Thursday 19 February 2026 03:46:07 +0000 (0:00:00.642) 0:09:34.454 *****
2026-02-19 03:46:15.657432 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:46:15.657450 | orchestrator |
2026-02-19 03:46:15.657468 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-19 03:46:15.657486 | orchestrator | Thursday 19 February 2026 03:46:08 +0000 (0:00:00.657) 0:09:35.112 *****
2026-02-19 03:46:15.657502 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:15.657520 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:15.657538 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:15.657556 | orchestrator |
2026-02-19 03:46:15.657574 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-19 03:46:15.657592 | orchestrator | Thursday 19 February 2026 03:46:08 +0000 (0:00:00.301) 0:09:35.414 *****
2026-02-19 03:46:15.657610 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:46:15.657629 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:46:15.657641 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:46:15.657652 | orchestrator |
2026-02-19 03:46:15.657662 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-19 03:46:15.657673 | orchestrator | Thursday 19 February 2026 03:46:09 +0000 (0:00:01.176) 0:09:36.590 *****
2026-02-19 03:46:15.657684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 03:46:15.657695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 03:46:15.657706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 03:46:15.657717 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:46:15.657727 | orchestrator |
2026-02-19 03:46:15.657738 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-19 03:46:15.657748 | orchestrator | Thursday 19 February 2026 03:46:10 +0000 (0:00:00.740) 0:09:37.331 *****
2026-02-19 03:46:15.657759 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:15.657770 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:15.657780 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:15.657791 | orchestrator |
2026-02-19 03:46:15.657801 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-19 03:46:15.657812 | orchestrator |
2026-02-19 03:46:15.657822 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 03:46:15.657833 | orchestrator | Thursday 19 February 2026 03:46:11 +0000 (0:00:00.666) 0:09:37.997 *****
2026-02-19 03:46:15.657844 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:46:15.657856 | orchestrator |
2026-02-19 03:46:15.657867 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-19 03:46:15.657877 | orchestrator | Thursday 19 February 2026 03:46:11 +0000 (0:00:00.507) 0:09:38.505 *****
2026-02-19 03:46:15.657888 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:46:15.657899 | orchestrator |
2026-02-19 03:46:15.657909 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-19 03:46:15.657920 | orchestrator | Thursday 19 February 2026 03:46:12 +0000 (0:00:00.285) 0:09:39.176 *****
2026-02-19 03:46:15.657930 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:46:15.657953 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:46:15.657964 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:46:15.657974 | orchestrator |
2026-02-19 03:46:15.657985 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-19 03:46:15.657995 | orchestrator | Thursday 19 February 2026 03:46:12 +0000 (0:00:00.285) 0:09:39.462 *****
2026-02-19 03:46:15.658006 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:15.658087 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:15.658100 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:15.658111 | orchestrator |
2026-02-19 03:46:15.658121 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-19 03:46:15.658141 | orchestrator | Thursday 19 February 2026 03:46:13 +0000 (0:00:00.689) 0:09:40.151 *****
2026-02-19 03:46:15.658152 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:15.658162 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:15.658173 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:15.658183 | orchestrator |
2026-02-19 03:46:15.658194 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-19 03:46:15.658205 | orchestrator | Thursday 19 February 2026 03:46:13 +0000 (0:00:00.728) 0:09:40.879 *****
2026-02-19 03:46:15.658215 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:15.658225 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:15.658236 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:15.658246 | orchestrator |
2026-02-19 03:46:15.658257 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-19 03:46:15.658268 | orchestrator | Thursday 19 February 2026 03:46:14 +0000 (0:00:00.951) 0:09:41.831 *****
2026-02-19 03:46:15.658278 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:46:15.658289 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:46:15.658299 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:46:15.658310 | orchestrator |
2026-02-19 03:46:15.658320 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-19 03:46:15.658331 | orchestrator | Thursday 19 February 2026 03:46:15 +0000 (0:00:00.288) 0:09:42.119 *****
2026-02-19 03:46:15.658342 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:46:15.658353 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:46:15.658363 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:46:15.658401 | orchestrator |
2026-02-19 03:46:15.658421 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-19 03:46:15.658434 | orchestrator | Thursday 19 February 2026 03:46:15 +0000 (0:00:00.281) 0:09:42.401 *****
2026-02-19 03:46:15.658460 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:46:37.410118 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:46:37.410307 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:46:37.410335 | orchestrator |
2026-02-19 03:46:37.410350 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-19 03:46:37.410364 | orchestrator | Thursday 19 February 2026 03:46:15 +0000 (0:00:00.474) 0:09:42.876 *****
2026-02-19 03:46:37.410431 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:37.410446 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:37.410457 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:37.410468 | orchestrator |
2026-02-19 03:46:37.410479 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-19 03:46:37.410490 | orchestrator | Thursday 19 February 2026 03:46:16 +0000 (0:00:00.744) 0:09:43.620 *****
2026-02-19 03:46:37.410501 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:37.410512 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:37.410523 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:37.410534 | orchestrator |
2026-02-19 03:46:37.410546 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-19 03:46:37.410558 | orchestrator | Thursday 19 February 2026 03:46:17 +0000 (0:00:00.715) 0:09:44.335 *****
2026-02-19 03:46:37.410570 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:46:37.410583 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:46:37.410596 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:46:37.410638 | orchestrator |
2026-02-19 03:46:37.410651 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 03:46:37.410663 | orchestrator | Thursday 19 February 2026 03:46:17 +0000 (0:00:00.275) 0:09:44.610 *****
2026-02-19 03:46:37.410675 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:46:37.410689 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:46:37.410700 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:46:37.410712 | orchestrator |
2026-02-19 03:46:37.410724 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 03:46:37.410735 | orchestrator | Thursday 19 February 2026 03:46:18 +0000 (0:00:00.466) 0:09:45.077 *****
2026-02-19 03:46:37.410746 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:37.410759 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:37.410771 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:37.410784 | orchestrator |
2026-02-19 03:46:37.410796 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 03:46:37.410808 | orchestrator | Thursday 19 February 2026 03:46:18 +0000 (0:00:00.365) 0:09:45.443 *****
2026-02-19 03:46:37.410820 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:37.410832 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:37.410843 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:37.410854 | orchestrator |
2026-02-19 03:46:37.410865 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 03:46:37.410877 | orchestrator | Thursday 19 February 2026 03:46:18 +0000 (0:00:00.313) 0:09:45.757 *****
2026-02-19 03:46:37.410888 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:37.410899 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:37.410910 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:37.410921 | orchestrator |
2026-02-19 03:46:37.410933 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 03:46:37.410946 | orchestrator | Thursday 19 February 2026 03:46:19 +0000 (0:00:00.316) 0:09:46.073 *****
2026-02-19 03:46:37.410958 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:46:37.410969 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:46:37.410979 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:46:37.410990 | orchestrator |
2026-02-19 03:46:37.411002 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 03:46:37.411013 | orchestrator | Thursday 19 February 2026 03:46:19 +0000 (0:00:00.513) 0:09:46.587 *****
2026-02-19 03:46:37.411024 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:46:37.411035 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:46:37.411047 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:46:37.411057 | orchestrator |
2026-02-19 03:46:37.411068 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 03:46:37.411079 | orchestrator | Thursday 19 February 2026 03:46:19 +0000 (0:00:00.310) 0:09:46.897 *****
2026-02-19 03:46:37.411090 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:46:37.411100 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:46:37.411111 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:46:37.411123 | orchestrator |
2026-02-19 03:46:37.411135 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 03:46:37.411146 | orchestrator | Thursday 19 February 2026 03:46:20 +0000 (0:00:00.284) 0:09:47.182 *****
2026-02-19 03:46:37.411175 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:37.411188 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:37.411198 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:37.411209 | orchestrator |
2026-02-19 03:46:37.411219 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 03:46:37.411230 | orchestrator | Thursday 19 February 2026 03:46:20 +0000 (0:00:00.335) 0:09:47.517 *****
2026-02-19 03:46:37.411241 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:46:37.411252 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:46:37.411262 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:46:37.411274 | orchestrator |
2026-02-19 03:46:37.411285 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-19 03:46:37.411309 | orchestrator | Thursday 19 February 2026 03:46:21 +0000 (0:00:00.740) 0:09:48.257 *****
2026-02-19 03:46:37.411323 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:46:37.411335 | orchestrator |
2026-02-19 03:46:37.411346 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-19 03:46:37.411357 | orchestrator | Thursday 19 February 2026 03:46:21 +0000 (0:00:00.513) 0:09:48.771 *****
2026-02-19 03:46:37.411368 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 03:46:37.411400 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-19 03:46:37.411409 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-19 03:46:37.411415 | orchestrator |
2026-02-19 03:46:37.411426 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-19 03:46:37.411438 | orchestrator | Thursday 19 February 2026 03:46:24 +0000 (0:00:02.331) 0:09:51.103 *****
2026-02-19 03:46:37.411472 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-19 03:46:37.411486 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-19 03:46:37.411498 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:46:37.411510 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-19 03:46:37.411522 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-19 03:46:37.411529 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:46:37.411536 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-19 03:46:37.411542 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-19 03:46:37.411549 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:46:37.411555 | orchestrator |
2026-02-19 03:46:37.411562 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-19 03:46:37.411569 | orchestrator | Thursday 19 February 2026 03:46:25 +0000 (0:00:01.647) 0:09:52.750 *****
2026-02-19 03:46:37.411576 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:46:37.411582 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:46:37.411588 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:46:37.411595 | orchestrator |
2026-02-19 03:46:37.411602 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-19 03:46:37.411608 | orchestrator | Thursday 19 February 2026 03:46:26 +0000 (0:00:00.351) 0:09:53.101 *****
2026-02-19 03:46:37.411615 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:46:37.411622 | orchestrator |
2026-02-19 03:46:37.411628 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-19 03:46:37.411635 | orchestrator | Thursday 19 February 2026 03:46:26 +0000 (0:00:00.544) 0:09:53.646 *****
2026-02-19 03:46:37.411643 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-19 03:46:37.411652 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 03:46:37.411658 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 03:46:37.411665 | orchestrator |
2026-02-19 03:46:37.411672 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-19 03:46:37.411678 | orchestrator | Thursday 19 February 2026 03:46:27 +0000 (0:00:01.191) 0:09:54.837 *****
2026-02-19 03:46:37.411685 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 03:46:37.411692 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-19 03:46:37.411699 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 03:46:37.411712 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-19 03:46:37.411719 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 03:46:37.411726 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-19 03:46:37.411732 | orchestrator |
2026-02-19 03:46:37.411739 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-19 03:46:37.411745 | orchestrator | Thursday 19 February 2026 03:46:32 +0000 (0:00:04.667) 0:09:59.504 *****
2026-02-19 03:46:37.411752 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 03:46:37.411759 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-19 03:46:37.411765 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 03:46:37.411777 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-19 03:46:37.411788 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 03:46:37.411798 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-19 03:46:37.411809 | orchestrator |
2026-02-19 03:46:37.411820 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-19 03:46:37.411832 | orchestrator | Thursday 19 February 2026 03:46:35 +0000 (0:00:02.427) 0:10:01.932 *****
2026-02-19 03:46:37.411844 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-19 03:46:37.411855 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:46:37.411867 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-19 03:46:37.411874 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:46:37.411881 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-19 03:46:37.411888 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:46:37.411894 | orchestrator |
2026-02-19 03:46:37.411901 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-19 03:46:37.411907 | orchestrator | Thursday 19 February 2026 03:46:36 +0000 (0:00:01.499) 0:10:03.431 *****
2026-02-19 03:46:37.411914 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-02-19 03:46:37.411921 | orchestrator |
2026-02-19 03:46:37.411927 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-19 03:46:37.411934 | orchestrator | Thursday 19 February 2026 03:46:36 +0000 (0:00:00.252) 0:10:03.683 *****
2026-02-19 03:46:37.411941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:46:37.411953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917742 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:21.917755 | orchestrator |
2026-02-19 03:47:21.917768 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-19 03:47:21.917781 | orchestrator | Thursday 19 February 2026 03:46:37 +0000 (0:00:00.637) 0:10:04.321 *****
2026-02-19 03:47:21.917792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917900 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:21.917911 | orchestrator |
2026-02-19 03:47:21.917923 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-19 03:47:21.917934 | orchestrator | Thursday 19 February 2026 03:46:38 +0000 (0:00:00.617) 0:10:04.938 *****
2026-02-19 03:47:21.917945 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917958 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917969 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917980 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.917990 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 03:47:21.918001 | orchestrator |
2026-02-19 03:47:21.918088 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-19 03:47:21.918102 | orchestrator | Thursday 19 February 2026 03:47:08 +0000 (0:00:30.855) 0:10:35.794 *****
2026-02-19 03:47:21.918113 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:21.918124 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:21.918135 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:21.918146 | orchestrator |
2026-02-19 03:47:21.918168 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-19 03:47:21.918181 | orchestrator | Thursday 19 February 2026 03:47:09 +0000 (0:00:00.300) 0:10:36.094 *****
2026-02-19 03:47:21.918205 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:21.918224 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:21.918242 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:21.918269 | orchestrator |
2026-02-19 03:47:21.918292 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-19 03:47:21.918310 | orchestrator | Thursday 19 February 2026 03:47:09 +0000 (0:00:00.280) 0:10:36.374 *****
2026-02-19 03:47:21.918329 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:47:21.918347 | orchestrator |
2026-02-19 03:47:21.918365 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-19 03:47:21.918408 | orchestrator | Thursday 19 February 2026 03:47:10 +0000 (0:00:00.664) 0:10:37.038 *****
2026-02-19 03:47:21.918428 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:47:21.918447 | orchestrator |
2026-02-19 03:47:21.918467 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-19 03:47:21.918484 | orchestrator | Thursday 19 February 2026 03:47:10 +0000 (0:00:00.481) 0:10:37.519 *****
2026-02-19 03:47:21.918504 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:47:21.918522 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:47:21.918539 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:47:21.918556 | orchestrator |
2026-02-19 03:47:21.918573 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-19 03:47:21.918605 | orchestrator | Thursday 19 February 2026 03:47:12 +0000 (0:00:01.428) 0:10:38.948 *****
2026-02-19 03:47:21.918623 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:47:21.918641 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:47:21.918658 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:47:21.918675 | orchestrator |
2026-02-19 03:47:21.918705 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-19 03:47:21.918750 | orchestrator | Thursday 19 February 2026 03:47:13 +0000 (0:00:01.204) 0:10:40.153 *****
2026-02-19 03:47:21.918779 | orchestrator | changed: [testbed-node-3]
2026-02-19 03:47:21.918798 | orchestrator | changed: [testbed-node-5]
2026-02-19 03:47:21.918815 | orchestrator | changed: [testbed-node-4]
2026-02-19 03:47:21.918831 | orchestrator |
2026-02-19 03:47:21.918847 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-19 03:47:21.918885 | orchestrator | Thursday 19 February 2026 03:47:15 +0000 (0:00:01.802) 0:10:41.956 *****
2026-02-19 03:47:21.918919 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 03:47:21.918946 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 03:47:21.918967 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-19 03:47:21.918985 | orchestrator |
2026-02-19 03:47:21.919003 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-19 03:47:21.919020 | orchestrator | Thursday 19 February 2026 03:47:18 +0000 (0:00:03.469) 0:10:45.425 *****
2026-02-19 03:47:21.919038 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:21.919057 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:21.919078 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:21.919104 | orchestrator
| 2026-02-19 03:47:21.919123 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-19 03:47:21.919140 | orchestrator | Thursday 19 February 2026 03:47:18 +0000 (0:00:00.354) 0:10:45.779 ***** 2026-02-19 03:47:21.919159 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 03:47:21.919184 | orchestrator | 2026-02-19 03:47:21.919207 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-19 03:47:21.919225 | orchestrator | Thursday 19 February 2026 03:47:19 +0000 (0:00:00.918) 0:10:46.698 ***** 2026-02-19 03:47:21.919243 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:47:21.919261 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:47:21.919279 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:47:21.919297 | orchestrator | 2026-02-19 03:47:21.919327 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-19 03:47:21.919345 | orchestrator | Thursday 19 February 2026 03:47:20 +0000 (0:00:00.358) 0:10:47.056 ***** 2026-02-19 03:47:21.919363 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:21.919405 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:47:21.919425 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:47:21.919447 | orchestrator | 2026-02-19 03:47:21.919473 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-19 03:47:21.919492 | orchestrator | Thursday 19 February 2026 03:47:20 +0000 (0:00:00.377) 0:10:47.433 ***** 2026-02-19 03:47:21.919510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 03:47:21.919529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 03:47:21.919546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 03:47:21.919565 | orchestrator 
| skipping: [testbed-node-3] 2026-02-19 03:47:21.919582 | orchestrator | 2026-02-19 03:47:21.919599 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-19 03:47:21.919618 | orchestrator | Thursday 19 February 2026 03:47:21 +0000 (0:00:00.875) 0:10:48.309 ***** 2026-02-19 03:47:21.919658 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:47:21.919680 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:47:21.919699 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:47:21.919716 | orchestrator | 2026-02-19 03:47:21.919734 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:47:21.919752 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-19 03:47:21.919787 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-19 03:47:21.919812 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-19 03:47:21.919831 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-19 03:47:21.919859 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-19 03:47:21.919880 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-19 03:47:21.919898 | orchestrator | 2026-02-19 03:47:21.919917 | orchestrator | 2026-02-19 03:47:21.919936 | orchestrator | 2026-02-19 03:47:21.919954 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:47:21.919969 | orchestrator | Thursday 19 February 2026 03:47:21 +0000 (0:00:00.503) 0:10:48.812 ***** 2026-02-19 03:47:21.919986 | orchestrator | =============================================================================== 
2026-02-19 03:47:21.920003 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 47.80s
2026-02-19 03:47:21.920021 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 44.30s
2026-02-19 03:47:21.920039 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.86s
2026-02-19 03:47:21.920073 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.39s
2026-02-19 03:47:22.352257 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.94s
2026-02-19 03:47:22.352370 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.30s
2026-02-19 03:47:22.352442 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.51s
2026-02-19 03:47:22.352457 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.06s
2026-02-19 03:47:22.352470 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.99s
2026-02-19 03:47:22.352483 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.59s
2026-02-19 03:47:22.352497 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.96s
2026-02-19 03:47:22.352508 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.51s
2026-02-19 03:47:22.352521 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.15s
2026-02-19 03:47:22.352532 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.67s
2026-02-19 03:47:22.352545 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 4.03s
2026-02-19 03:47:22.352558 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.02s
2026-02-19 03:47:22.352570 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.86s
2026-02-19 03:47:22.352581 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.85s
2026-02-19 03:47:22.352592 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.71s
2026-02-19 03:47:22.352603 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.52s
2026-02-19 03:47:24.921106 | orchestrator | 2026-02-19 03:47:24 | INFO  | Task 9c264293-0e6a-4711-8009-47c37cef09ba (ceph-pools) was prepared for execution.
2026-02-19 03:47:24.921201 | orchestrator | 2026-02-19 03:47:24 | INFO  | It takes a moment until task 9c264293-0e6a-4711-8009-47c37cef09ba (ceph-pools) has been started and output is visible here.
2026-02-19 03:47:39.802476 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-19 03:47:39.802583 | orchestrator | 2.16.14
2026-02-19 03:47:39.802601 | orchestrator |
2026-02-19 03:47:39.802613 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-02-19 03:47:39.802626 | orchestrator |
2026-02-19 03:47:39.802637 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-19 03:47:39.802649 | orchestrator | Thursday 19 February 2026 03:47:29 +0000 (0:00:00.626) 0:00:00.626 *****
2026-02-19 03:47:39.802659 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 03:47:39.802671 | orchestrator |
2026-02-19 03:47:39.802682 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-19 03:47:39.802693 | orchestrator | Thursday 19 February 2026 03:47:30 +0000 (0:00:00.701) 0:00:01.328 *****
2026-02-19 03:47:39.802704 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:47:39.802715 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:47:39.802725 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:47:39.802735 | orchestrator |
2026-02-19 03:47:39.802746 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-19 03:47:39.802756 | orchestrator | Thursday 19 February 2026 03:47:30 +0000 (0:00:00.672) 0:00:02.001 *****
2026-02-19 03:47:39.802766 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:47:39.802777 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:47:39.802787 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:47:39.802797 | orchestrator |
2026-02-19 03:47:39.802808 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-19 03:47:39.802831 | orchestrator | Thursday 19 February 2026 03:47:31 +0000 (0:00:00.335) 0:00:02.336 *****
2026-02-19 03:47:39.802838 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:47:39.802844 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:47:39.802850 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:47:39.802857 | orchestrator |
2026-02-19 03:47:39.802863 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-19 03:47:39.802869 | orchestrator | Thursday 19 February 2026 03:47:32 +0000 (0:00:00.875) 0:00:03.211 *****
2026-02-19 03:47:39.802875 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:47:39.802881 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:47:39.802887 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:47:39.802893 | orchestrator |
2026-02-19 03:47:39.802899 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-19 03:47:39.802906 | orchestrator | Thursday 19 February 2026 03:47:32 +0000 (0:00:00.368) 0:00:03.580 *****
2026-02-19 03:47:39.802912 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:47:39.802918 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:47:39.802924 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:47:39.802930 | orchestrator |
2026-02-19 03:47:39.802936 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-19 03:47:39.802944 | orchestrator | Thursday 19 February 2026 03:47:32 +0000 (0:00:00.314) 0:00:03.895 *****
2026-02-19 03:47:39.802951 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:47:39.802958 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:47:39.802965 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:47:39.802972 | orchestrator |
2026-02-19 03:47:39.802979 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-19 03:47:39.802986 | orchestrator | Thursday 19 February 2026 03:47:33 +0000 (0:00:00.333) 0:00:04.228 *****
2026-02-19 03:47:39.802993 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:39.803001 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:39.803030 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:39.803037 | orchestrator |
2026-02-19 03:47:39.803044 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-19 03:47:39.803051 | orchestrator | Thursday 19 February 2026 03:47:33 +0000 (0:00:00.636) 0:00:04.865 *****
2026-02-19 03:47:39.803058 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:47:39.803066 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:47:39.803073 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:47:39.803080 | orchestrator |
2026-02-19 03:47:39.803086 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-19 03:47:39.803093 | orchestrator | Thursday 19 February 2026 03:47:34 +0000 (0:00:00.317) 0:00:05.182 *****
2026-02-19 03:47:39.803101 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 03:47:39.803108 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 03:47:39.803115 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 03:47:39.803122 | orchestrator |
2026-02-19 03:47:39.803129 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-19 03:47:39.803136 | orchestrator | Thursday 19 February 2026 03:47:34 +0000 (0:00:00.732) 0:00:05.915 *****
2026-02-19 03:47:39.803142 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:47:39.803150 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:47:39.803158 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:47:39.803170 | orchestrator |
2026-02-19 03:47:39.803180 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-19 03:47:39.803191 | orchestrator | Thursday 19 February 2026 03:47:35 +0000 (0:00:00.470) 0:00:06.386 *****
2026-02-19 03:47:39.803201 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 03:47:39.803211 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 03:47:39.803222 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 03:47:39.803233 | orchestrator |
2026-02-19 03:47:39.803243 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-19 03:47:39.803253 | orchestrator | Thursday 19 February 2026 03:47:37 +0000 (0:00:02.348) 0:00:08.735 *****
2026-02-19 03:47:39.803263 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0) 
2026-02-19 03:47:39.803274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1) 
2026-02-19 03:47:39.803284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2) 
2026-02-19 03:47:39.803295 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:39.803305 | orchestrator |
2026-02-19 03:47:39.803334 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-19 03:47:39.803346 | orchestrator | Thursday 19 February 2026 03:47:38 +0000 (0:00:00.661) 0:00:09.397 *****
2026-02-19 03:47:39.803359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 
2026-02-19 03:47:39.803372 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2026-02-19 03:47:39.803414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 
2026-02-19 03:47:39.803426 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:39.803436 | orchestrator |
2026-02-19 03:47:39.803447 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-19 03:47:39.803458 | orchestrator | Thursday 19 February 2026 03:47:39 +0000 (0:00:01.066) 0:00:10.463 *****
2026-02-19 03:47:39.803487 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-02-19 03:47:39.803500 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-02-19 03:47:39.803511 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-02-19 03:47:39.803522 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:39.803532 | orchestrator |
2026-02-19 03:47:39.803543 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-19 03:47:39.803553 | orchestrator | Thursday 19 February 2026 03:47:39 +0000 (0:00:00.151) 0:00:10.615 *****
2026-02-19 03:47:39.803566 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd0a6e5ab4aac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 03:47:36.309626', 'end': '2026-02-19 03:47:36.348188', 'delta': '0:00:00.038562', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0a6e5ab4aac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-19 03:47:39.803582 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a8e499fc5d9a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 03:47:36.940603', 'end': '2026-02-19 03:47:36.977901', 'delta': '0:00:00.037298', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8e499fc5d9a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 03:47:39.803602 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '7f7671ec0784', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 03:47:37.514179', 'end': '2026-02-19 03:47:37.563314', 'delta': '0:00:00.049135', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7f7671ec0784'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 03:47:46.804983 | orchestrator |
2026-02-19 03:47:46.805103 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-19 03:47:46.805115 | orchestrator | Thursday 19 February 2026 03:47:39 +0000 (0:00:00.187) 0:00:10.802 *****
2026-02-19 03:47:46.805124 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:47:46.805133 | orchestrator | ok: [testbed-node-4]
2026-02-19 03:47:46.805141 | orchestrator | ok: [testbed-node-5]
2026-02-19 03:47:46.805149 | orchestrator |
2026-02-19 03:47:46.805157 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-19 03:47:46.805185 | orchestrator | Thursday 19 February 2026 03:47:40 +0000 (0:00:00.463) 0:00:11.266 *****
2026-02-19 03:47:46.805207 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-19 03:47:46.805223 | orchestrator |
2026-02-19 03:47:46.805236 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-19 03:47:46.805249 | orchestrator | Thursday 19 February 2026 03:47:42 +0000 (0:00:01.828) 0:00:13.094 *****
2026-02-19 03:47:46.805262 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:46.805274 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:46.805286 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:46.805298 | orchestrator |
2026-02-19 03:47:46.805310 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-19 03:47:46.805322 | orchestrator | Thursday 19 February 2026 03:47:42 +0000 (0:00:00.307) 0:00:13.401 *****
2026-02-19 03:47:46.805334 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:46.805348 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:46.805361 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:46.805374 | orchestrator |
2026-02-19 03:47:46.805497 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 03:47:46.805516 | orchestrator | Thursday 19 February 2026 03:47:43 +0000 (0:00:00.814) 0:00:14.216 *****
2026-02-19 03:47:46.805529 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:46.805542 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:46.805556 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:46.805569 | orchestrator |
2026-02-19 03:47:46.805581 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-19 03:47:46.805594 | orchestrator | Thursday 19 February 2026 03:47:43 +0000 (0:00:00.322) 0:00:14.538 *****
2026-02-19 03:47:46.805607 | orchestrator | ok: [testbed-node-3]
2026-02-19 03:47:46.805620 | orchestrator |
2026-02-19 03:47:46.805635 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-19 03:47:46.805650 | orchestrator | Thursday 19 February 2026 03:47:43 +0000 (0:00:00.127) 0:00:14.666 *****
2026-02-19 03:47:46.805664 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:46.805677 | orchestrator |
2026-02-19 03:47:46.805691 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 03:47:46.805706 | orchestrator | Thursday 19 February 2026 03:47:43 +0000 (0:00:00.237) 0:00:14.903 *****
2026-02-19 03:47:46.805720 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:46.805733 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:46.805747 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:46.805755 | orchestrator |
2026-02-19 03:47:46.805765 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-19 03:47:46.805774 | orchestrator | Thursday 19 February 2026 03:47:44 +0000 (0:00:00.309) 0:00:15.182 *****
2026-02-19 03:47:46.805783 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:46.805791 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:46.805801 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:46.805810 | orchestrator |
2026-02-19 03:47:46.805819 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-19 03:47:46.805828 | orchestrator | Thursday 19 February 2026 03:47:44 +0000 (0:00:00.561) 0:00:15.491 *****
2026-02-19 03:47:46.805837 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:46.805846 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:46.805855 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:46.805864 | orchestrator |
2026-02-19 03:47:46.805884 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-19 03:47:46.805893 | orchestrator | Thursday 19 February 2026 03:47:45 +0000 (0:00:00.333) 0:00:16.053 *****
2026-02-19 03:47:46.805902 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:46.805911 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:46.805920 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:46.805928 | orchestrator |
2026-02-19 03:47:46.805936 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-19 03:47:46.805944 | orchestrator | Thursday 19 February 2026 03:47:45 +0000 (0:00:00.329) 0:00:16.387 *****
2026-02-19 03:47:46.805955 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:46.805970 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:46.806009 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:46.806080 | orchestrator |
2026-02-19 03:47:46.806094 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-19 03:47:46.806106 | orchestrator | Thursday 19 February 2026 03:47:45 +0000 (0:00:00.516) 0:00:16.716 *****
2026-02-19 03:47:46.806117 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:46.806130 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:46.806142 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:46.806155 | orchestrator |
2026-02-19 03:47:46.806168 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-19 03:47:46.806180 | orchestrator | Thursday 19 February 2026 03:47:46 +0000 (0:00:00.351) 0:00:17.232 *****
2026-02-19 03:47:46.806193 | orchestrator | skipping: [testbed-node-3]
2026-02-19 03:47:46.806206 | orchestrator | skipping: [testbed-node-4]
2026-02-19 03:47:46.806218 | orchestrator | skipping: [testbed-node-5]
2026-02-19 03:47:46.806231 | orchestrator |
2026-02-19 03:47:46.806243 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-19 03:47:46.806255 | orchestrator | Thursday 19 February 2026 03:47:46 +0000 (0:00:00.351) 0:00:17.584 *****
2026-02-19 03:47:46.806295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159', 'dm-uuid-LVM-woOiLPc2MZX9tMqNu9mJ52M00GUnNLJGpmysyPKim6lEMTRsO9IDguylIzFZfnRl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}) 
2026-02-19 03:47:46.806327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542', 'dm-uuid-LVM-lX34uhB8tmDTkL93DczNXv6QbAw0ysjKmdjNAgdMohU9ZcAXcHNfClcWYQxdmajV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}) 
2026-02-19 03:47:46.806343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-19 03:47:46.806359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-19 03:47:46.806377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-19 03:47:46.806405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-19 03:47:46.806420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-19 03:47:46.806433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-19 03:47:46.806442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-19 03:47:46.806460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-19 03:47:46.876683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}) 
2026-02-19 03:47:46.876788 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-he7JRo-1c5L-pX5O-Be3A-VFvn-vFA2-R1K8r6', 'scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e', 'scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:46.876800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qeKANd-btTr-kyqx-ZYbg-qz1F-HqnA-ll4bBH', 'scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8', 'scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:46.876829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02', 'dm-uuid-LVM-av3z15qCzrck2TCuh26quy9SxGc4Uj0HHGk96w6thbK5NXQZcgefX0YYJ6eJW1Ww'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:46.876842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417', 'scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:46.876855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160', 'dm-uuid-LVM-rZldl4LmlLXg6d7bs7fyJX4wA6bTnXoE36sCfZeCCq67ndja1fQrkP9qxd3UF2mf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:46.876884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:46.876898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:46.876914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:46.876925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:46.876937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:46.876957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-19 03:47:47.009325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.009492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.009520 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:47.009527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.009535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:47.009553 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hPAd08-UuBL-3Ygg-jY8a-jEiG-hu1p-INZmAJ', 'scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262', 'scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:47.009564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6C6XL0-fLb8-YfTA-cysM-yAaf-4LBE-w1N2gW', 'scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0', 'scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:47.009573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58', 'scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:47.009578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:47.009583 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:47:47.009587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210', 'dm-uuid-LVM-UIbdS0VVHImCuypuIpNFpiSdvep5TRFy7pgtKei4H9zcQ1O9SOgtegap7Wmtw1fM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.009592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456', 'dm-uuid-LVM-gHzkzoT6x1EhckfA8WsFQCGWNshTerqrXG1Ajk5mh4ejOwZYq1z2HQZKbcxUaUg2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.009597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.009605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.309067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.309162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.309169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.309174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.309178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.309183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-19 03:47:47.309206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:47.309219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6O260y-bve9-uiSU-QHAy-uS14-SBn4-tvFUE4', 'scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42', 'scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:47.309225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-82yKcB-Ey0W-COBu-ydNY-Ko6v-AgZ3-OegvdJ', 'scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46', 'scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:47.309230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b', 'scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:47.309236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-19 03:47:47.309242 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:47:47.309248 | orchestrator | 2026-02-19 03:47:47.309254 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-02-19 03:47:47.309262 | orchestrator | Thursday 19 February 2026 03:47:47 +0000 (0:00:00.623) 0:00:18.207 ***** 2026-02-19 03:47:47.309280 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159', 'dm-uuid-LVM-woOiLPc2MZX9tMqNu9mJ52M00GUnNLJGpmysyPKim6lEMTRsO9IDguylIzFZfnRl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.411707 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542', 'dm-uuid-LVM-lX34uhB8tmDTkL93DczNXv6QbAw0ysjKmdjNAgdMohU9ZcAXcHNfClcWYQxdmajV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.411786 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.411797 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.411803 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.411809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.411815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.411869 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.411877 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.411883 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.411891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-19 03:47:47.411933 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02', 'dm-uuid-LVM-av3z15qCzrck2TCuh26quy9SxGc4Uj0HHGk96w6thbK5NXQZcgefX0YYJ6eJW1Ww'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.531954 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-he7JRo-1c5L-pX5O-Be3A-VFvn-vFA2-R1K8r6', 'scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e', 'scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.532048 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160', 'dm-uuid-LVM-rZldl4LmlLXg6d7bs7fyJX4wA6bTnXoE36sCfZeCCq67ndja1fQrkP9qxd3UF2mf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.532061 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qeKANd-btTr-kyqx-ZYbg-qz1F-HqnA-ll4bBH', 'scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8', 'scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.532071 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.532133 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417', 'scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.532145 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.532154 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.532163 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.532173 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.532182 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.532202 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.532220 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.754052 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.754133 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': 
'512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.754171 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hPAd08-UuBL-3Ygg-jY8a-jEiG-hu1p-INZmAJ', 'scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262', 'scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.754188 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6C6XL0-fLb8-YfTA-cysM-yAaf-4LBE-w1N2gW', 'scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0', 'scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.754193 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:47.754199 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58', 'scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.754205 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-20-00']}, 'model': 'QEMU DVD-ROM', 
'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.754219 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:47:47.754229 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210', 'dm-uuid-LVM-UIbdS0VVHImCuypuIpNFpiSdvep5TRFy7pgtKei4H9zcQ1O9SOgtegap7Wmtw1fM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.754236 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456', 'dm-uuid-LVM-gHzkzoT6x1EhckfA8WsFQCGWNshTerqrXG1Ajk5mh4ejOwZYq1z2HQZKbcxUaUg2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-19 03:47:47.754251 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.896669 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.896762 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.896773 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.896801 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.896822 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.896828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.896852 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.896862 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-19 03:47:47.896881 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6O260y-bve9-uiSU-QHAy-uS14-SBn4-tvFUE4', 'scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42', 'scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:47.896896 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-82yKcB-Ey0W-COBu-ydNY-Ko6v-AgZ3-OegvdJ', 'scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46', 'scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:58.273489 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b', 'scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:58.273588 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-19-02-28-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-19 03:47:58.273625 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:47:58.273633 | orchestrator | 2026-02-19 03:47:58.273640 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 03:47:58.273647 | orchestrator | Thursday 19 February 2026 03:47:47 +0000 (0:00:00.693) 0:00:18.901 ***** 2026-02-19 03:47:58.273652 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:47:58.273658 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:47:58.273663 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:47:58.273668 | orchestrator | 2026-02-19 03:47:58.273673 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 03:47:58.273679 | orchestrator | Thursday 19 February 2026 03:47:48 +0000 (0:00:00.864) 0:00:19.766 ***** 2026-02-19 03:47:58.273684 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:47:58.273689 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:47:58.273694 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:47:58.273699 | orchestrator | 2026-02-19 03:47:58.273704 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 03:47:58.273709 | orchestrator | Thursday 19 February 2026 03:47:49 +0000 (0:00:00.314) 0:00:20.080 ***** 2026-02-19 03:47:58.273726 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:47:58.273731 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:47:58.273736 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:47:58.273741 | orchestrator | 2026-02-19 03:47:58.273746 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 03:47:58.273751 | orchestrator | Thursday 19 February 2026 03:47:49 +0000 (0:00:00.685) 
0:00:20.766 ***** 2026-02-19 03:47:58.273756 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:58.273761 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:47:58.273766 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:47:58.273771 | orchestrator | 2026-02-19 03:47:58.273776 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 03:47:58.273781 | orchestrator | Thursday 19 February 2026 03:47:50 +0000 (0:00:00.325) 0:00:21.091 ***** 2026-02-19 03:47:58.273786 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:58.273791 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:47:58.273796 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:47:58.273812 | orchestrator | 2026-02-19 03:47:58.273824 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 03:47:58.273829 | orchestrator | Thursday 19 February 2026 03:47:50 +0000 (0:00:00.724) 0:00:21.816 ***** 2026-02-19 03:47:58.273834 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:58.273839 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:47:58.273844 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:47:58.273849 | orchestrator | 2026-02-19 03:47:58.273854 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 03:47:58.273859 | orchestrator | Thursday 19 February 2026 03:47:51 +0000 (0:00:00.333) 0:00:22.149 ***** 2026-02-19 03:47:58.273866 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-19 03:47:58.273875 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-19 03:47:58.273883 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-19 03:47:58.273892 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-19 03:47:58.273900 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-19 03:47:58.273917 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-19 03:47:58.273926 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-19 03:47:58.273933 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-19 03:47:58.273939 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-19 03:47:58.273944 | orchestrator | 2026-02-19 03:47:58.273949 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 03:47:58.273955 | orchestrator | Thursday 19 February 2026 03:47:52 +0000 (0:00:01.072) 0:00:23.222 ***** 2026-02-19 03:47:58.273972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-19 03:47:58.273978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-19 03:47:58.273983 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-19 03:47:58.273988 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:58.273993 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-19 03:47:58.273998 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-19 03:47:58.274004 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-19 03:47:58.274009 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:47:58.274014 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-19 03:47:58.274061 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-19 03:47:58.274066 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-19 03:47:58.274071 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:47:58.274076 | orchestrator | 2026-02-19 03:47:58.274081 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 03:47:58.274107 | orchestrator | Thursday 19 February 2026 03:47:52 +0000 (0:00:00.373) 0:00:23.595 ***** 2026-02-19 
03:47:58.274114 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 03:47:58.274119 | orchestrator | 2026-02-19 03:47:58.274125 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-19 03:47:58.274131 | orchestrator | Thursday 19 February 2026 03:47:53 +0000 (0:00:00.738) 0:00:24.334 ***** 2026-02-19 03:47:58.274136 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:58.274141 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:47:58.274146 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:47:58.274151 | orchestrator | 2026-02-19 03:47:58.274156 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-19 03:47:58.274161 | orchestrator | Thursday 19 February 2026 03:47:53 +0000 (0:00:00.327) 0:00:24.661 ***** 2026-02-19 03:47:58.274166 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:58.274171 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:47:58.274176 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:47:58.274181 | orchestrator | 2026-02-19 03:47:58.274186 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-19 03:47:58.274191 | orchestrator | Thursday 19 February 2026 03:47:53 +0000 (0:00:00.330) 0:00:24.992 ***** 2026-02-19 03:47:58.274196 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:58.274201 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:47:58.274208 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:47:58.274217 | orchestrator | 2026-02-19 03:47:58.274226 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-19 03:47:58.274234 | orchestrator | Thursday 19 February 2026 03:47:54 +0000 (0:00:00.553) 0:00:25.545 ***** 2026-02-19 
03:47:58.274242 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:47:58.274251 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:47:58.274259 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:47:58.274267 | orchestrator | 2026-02-19 03:47:58.274275 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-19 03:47:58.274283 | orchestrator | Thursday 19 February 2026 03:47:54 +0000 (0:00:00.435) 0:00:25.981 ***** 2026-02-19 03:47:58.274304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 03:47:58.274323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 03:47:58.274331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 03:47:58.274337 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:58.274342 | orchestrator | 2026-02-19 03:47:58.274347 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-19 03:47:58.274352 | orchestrator | Thursday 19 February 2026 03:47:55 +0000 (0:00:00.391) 0:00:26.373 ***** 2026-02-19 03:47:58.274357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 03:47:58.274362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 03:47:58.274367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 03:47:58.274372 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:58.274377 | orchestrator | 2026-02-19 03:47:58.274382 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 03:47:58.274408 | orchestrator | Thursday 19 February 2026 03:47:55 +0000 (0:00:00.397) 0:00:26.770 ***** 2026-02-19 03:47:58.274416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 03:47:58.274421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 03:47:58.274426 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 03:47:58.274431 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:47:58.274435 | orchestrator | 2026-02-19 03:47:58.274440 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 03:47:58.274445 | orchestrator | Thursday 19 February 2026 03:47:56 +0000 (0:00:00.415) 0:00:27.186 ***** 2026-02-19 03:47:58.274450 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:47:58.274455 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:47:58.274460 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:47:58.274465 | orchestrator | 2026-02-19 03:47:58.274470 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-19 03:47:58.274475 | orchestrator | Thursday 19 February 2026 03:47:56 +0000 (0:00:00.339) 0:00:27.525 ***** 2026-02-19 03:47:58.274480 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-19 03:47:58.274485 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-19 03:47:58.274490 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-19 03:47:58.274495 | orchestrator | 2026-02-19 03:47:58.274500 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-19 03:47:58.274505 | orchestrator | Thursday 19 February 2026 03:47:57 +0000 (0:00:00.854) 0:00:28.380 ***** 2026-02-19 03:47:58.274510 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 03:47:58.274522 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 03:49:41.076080 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 03:49:41.076167 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-19 03:49:41.076174 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-19 03:49:41.076179 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 03:49:41.076183 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 03:49:41.076187 | orchestrator | 2026-02-19 03:49:41.076193 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-19 03:49:41.076198 | orchestrator | Thursday 19 February 2026 03:47:58 +0000 (0:00:00.889) 0:00:29.270 ***** 2026-02-19 03:49:41.076201 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 03:49:41.076205 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 03:49:41.076209 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 03:49:41.076228 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-19 03:49:41.076232 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 03:49:41.076236 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 03:49:41.076240 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 03:49:41.076243 | orchestrator | 2026-02-19 03:49:41.076247 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-19 03:49:41.076251 | orchestrator | Thursday 19 February 2026 03:47:59 +0000 (0:00:01.659) 0:00:30.929 ***** 2026-02-19 03:49:41.076255 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:49:41.076260 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:49:41.076264 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-19 03:49:41.076267 | orchestrator | 2026-02-19 03:49:41.076271 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-19 03:49:41.076275 | orchestrator | Thursday 19 February 2026 03:48:00 +0000 (0:00:00.404) 0:00:31.334 ***** 2026-02-19 03:49:41.076280 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-19 03:49:41.076297 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-19 03:49:41.076301 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-19 03:49:41.076305 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-19 03:49:41.076309 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-19 03:49:41.076313 | orchestrator | 2026-02-19 03:49:41.076316 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-19 03:49:41.076320 | orchestrator | Thursday 19 February 2026 03:48:46 +0000 (0:00:46.061) 0:01:17.395 ***** 2026-02-19 03:49:41.076324 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076328 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076331 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076335 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076339 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076343 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076347 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-19 03:49:41.076350 | orchestrator | 2026-02-19 03:49:41.076354 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-19 03:49:41.076358 | orchestrator | Thursday 19 February 2026 03:49:10 +0000 (0:00:24.558) 0:01:41.954 ***** 2026-02-19 03:49:41.076373 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076378 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076381 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076385 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076389 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076392 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076396 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-19 03:49:41.076400 | orchestrator | 2026-02-19 03:49:41.076403 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-19 03:49:41.076456 | orchestrator | Thursday 19 February 2026 03:49:23 +0000 (0:00:12.237) 0:01:54.191 ***** 2026-02-19 03:49:41.076463 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076469 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-19 03:49:41.076475 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-19 03:49:41.076481 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076487 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-19 03:49:41.076492 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-19 03:49:41.076498 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076503 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-19 03:49:41.076509 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-19 03:49:41.076515 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076521 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-19 03:49:41.076527 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-19 03:49:41.076533 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076539 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-19 03:49:41.076544 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-19 03:49:41.076550 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 03:49:41.076556 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-19 03:49:41.076561 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-19 03:49:41.076574 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-19 03:49:41.076578 | orchestrator | 2026-02-19 03:49:41.076582 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:49:41.076586 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-19 03:49:41.076592 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-19 03:49:41.076596 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-19 03:49:41.076600 | orchestrator | 2026-02-19 03:49:41.076604 | orchestrator | 2026-02-19 03:49:41.076608 | orchestrator | 2026-02-19 03:49:41.076611 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:49:41.076620 | orchestrator | Thursday 19 February 2026 03:49:41 +0000 (0:00:17.860) 0:02:12.052 ***** 2026-02-19 03:49:41.076624 | orchestrator | =============================================================================== 2026-02-19 03:49:41.076627 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.06s 2026-02-19 03:49:41.076631 | orchestrator | generate keys ---------------------------------------------------------- 24.56s 2026-02-19 03:49:41.076635 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.86s 
2026-02-19 03:49:41.076638 | orchestrator | get keys from monitors ------------------------------------------------- 12.24s 2026-02-19 03:49:41.076642 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.35s 2026-02-19 03:49:41.076647 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.83s 2026-02-19 03:49:41.076651 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.66s 2026-02-19 03:49:41.076655 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.07s 2026-02-19 03:49:41.076660 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.07s 2026-02-19 03:49:41.076664 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.89s 2026-02-19 03:49:41.076669 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.88s 2026-02-19 03:49:41.076673 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.87s 2026-02-19 03:49:41.076677 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.85s 2026-02-19 03:49:41.076687 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.81s 2026-02-19 03:49:41.393999 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.74s 2026-02-19 03:49:41.394129 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.73s 2026-02-19 03:49:41.394140 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.72s 2026-02-19 03:49:41.394148 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.70s 2026-02-19 03:49:41.394156 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.69s 2026-02-19 
03:49:41.394164 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.69s 2026-02-19 03:49:43.732706 | orchestrator | 2026-02-19 03:49:43 | INFO  | Task feef76f6-b6b4-47dc-b02b-1284ff8ff385 (copy-ceph-keys) was prepared for execution. 2026-02-19 03:49:43.732803 | orchestrator | 2026-02-19 03:49:43 | INFO  | It takes a moment until task feef76f6-b6b4-47dc-b02b-1284ff8ff385 (copy-ceph-keys) has been started and output is visible here. 2026-02-19 03:50:22.256846 | orchestrator | 2026-02-19 03:50:22.256946 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-19 03:50:22.256963 | orchestrator | 2026-02-19 03:50:22.256975 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-19 03:50:22.256987 | orchestrator | Thursday 19 February 2026 03:49:47 +0000 (0:00:00.155) 0:00:00.155 ***** 2026-02-19 03:50:22.256998 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-19 03:50:22.257011 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-19 03:50:22.257021 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-19 03:50:22.257028 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-19 03:50:22.257035 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-19 03:50:22.257042 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-19 03:50:22.257049 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-19 03:50:22.257073 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-02-19 03:50:22.257080 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-19 03:50:22.257087 | orchestrator | 2026-02-19 03:50:22.257094 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-19 03:50:22.257101 | orchestrator | Thursday 19 February 2026 03:49:52 +0000 (0:00:04.770) 0:00:04.926 ***** 2026-02-19 03:50:22.257125 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-19 03:50:22.257143 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-19 03:50:22.257155 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-19 03:50:22.257165 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-19 03:50:22.257175 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-19 03:50:22.257185 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-19 03:50:22.257195 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-19 03:50:22.257205 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-19 03:50:22.257215 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-19 03:50:22.257225 | orchestrator | 2026-02-19 03:50:22.257235 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-19 03:50:22.257246 | orchestrator | Thursday 19 February 2026 03:49:56 +0000 (0:00:04.345) 0:00:09.272 ***** 2026-02-19 03:50:22.257257 
| orchestrator | changed: [testbed-manager -> localhost] 2026-02-19 03:50:22.257269 | orchestrator | 2026-02-19 03:50:22.257280 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-19 03:50:22.257291 | orchestrator | Thursday 19 February 2026 03:49:57 +0000 (0:00:01.004) 0:00:10.277 ***** 2026-02-19 03:50:22.257299 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-19 03:50:22.257306 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-19 03:50:22.257313 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-19 03:50:22.257320 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-19 03:50:22.257326 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-19 03:50:22.257333 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-19 03:50:22.257340 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-19 03:50:22.257346 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-19 03:50:22.257353 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-19 03:50:22.257359 | orchestrator | 2026-02-19 03:50:22.257366 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-19 03:50:22.257372 | orchestrator | Thursday 19 February 2026 03:50:11 +0000 (0:00:13.549) 0:00:23.827 ***** 2026-02-19 03:50:22.257382 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-19 03:50:22.257393 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-02-19 03:50:22.257405 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-19 03:50:22.257443 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-19 03:50:22.257479 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-19 03:50:22.257492 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-19 03:50:22.257504 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-19 03:50:22.257515 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-19 03:50:22.257528 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-19 03:50:22.257538 | orchestrator | 2026-02-19 03:50:22.257546 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-19 03:50:22.257554 | orchestrator | Thursday 19 February 2026 03:50:14 +0000 (0:00:03.203) 0:00:27.030 ***** 2026-02-19 03:50:22.257562 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-19 03:50:22.257569 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-19 03:50:22.257576 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-19 03:50:22.257582 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-19 03:50:22.257589 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-19 03:50:22.257595 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-19 03:50:22.257602 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-02-19 03:50:22.257608 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-19 03:50:22.257614 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-19 03:50:22.257622 | orchestrator | 2026-02-19 03:50:22.257634 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:50:22.257641 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:50:22.257649 | orchestrator | 2026-02-19 03:50:22.257673 | orchestrator | 2026-02-19 03:50:22.257694 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:50:22.257706 | orchestrator | Thursday 19 February 2026 03:50:21 +0000 (0:00:07.180) 0:00:34.210 ***** 2026-02-19 03:50:22.257729 | orchestrator | =============================================================================== 2026-02-19 03:50:22.257751 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.55s 2026-02-19 03:50:22.257770 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.18s 2026-02-19 03:50:22.257781 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.77s 2026-02-19 03:50:22.257792 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.35s 2026-02-19 03:50:22.257802 | orchestrator | Check if target directories exist --------------------------------------- 3.20s 2026-02-19 03:50:22.257812 | orchestrator | Create share directory -------------------------------------------------- 1.00s 2026-02-19 03:50:34.721719 | orchestrator | 2026-02-19 03:50:34 | INFO  | Task 4daae832-7dff-4381-88b5-bee469c29eef (cephclient) was prepared for execution. 
2026-02-19 03:50:34.721824 | orchestrator | 2026-02-19 03:50:34 | INFO  | It takes a moment until task 4daae832-7dff-4381-88b5-bee469c29eef (cephclient) has been started and output is visible here. 2026-02-19 03:51:38.009040 | orchestrator | 2026-02-19 03:51:38.009168 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-19 03:51:38.009186 | orchestrator | 2026-02-19 03:51:38.009197 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-19 03:51:38.009208 | orchestrator | Thursday 19 February 2026 03:50:39 +0000 (0:00:00.235) 0:00:00.235 ***** 2026-02-19 03:51:38.009219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-19 03:51:38.009256 | orchestrator | 2026-02-19 03:51:38.009267 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-19 03:51:38.009276 | orchestrator | Thursday 19 February 2026 03:50:39 +0000 (0:00:00.233) 0:00:00.468 ***** 2026-02-19 03:51:38.009282 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-19 03:51:38.009289 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-19 03:51:38.009297 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-19 03:51:38.009303 | orchestrator | 2026-02-19 03:51:38.009309 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-19 03:51:38.009316 | orchestrator | Thursday 19 February 2026 03:50:40 +0000 (0:00:01.253) 0:00:01.722 ***** 2026-02-19 03:51:38.009323 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-19 03:51:38.009329 | orchestrator | 2026-02-19 03:51:38.009335 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-02-19 03:51:38.009341 | orchestrator | Thursday 19 February 2026 03:50:42 +0000 (0:00:01.518) 0:00:03.241 ***** 2026-02-19 03:51:38.009348 | orchestrator | changed: [testbed-manager] 2026-02-19 03:51:38.009354 | orchestrator | 2026-02-19 03:51:38.009361 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-19 03:51:38.009367 | orchestrator | Thursday 19 February 2026 03:50:43 +0000 (0:00:00.968) 0:00:04.209 ***** 2026-02-19 03:51:38.009373 | orchestrator | changed: [testbed-manager] 2026-02-19 03:51:38.009379 | orchestrator | 2026-02-19 03:51:38.009385 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-19 03:51:38.009391 | orchestrator | Thursday 19 February 2026 03:50:43 +0000 (0:00:00.964) 0:00:05.173 ***** 2026-02-19 03:51:38.009397 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-02-19 03:51:38.009416 | orchestrator | ok: [testbed-manager] 2026-02-19 03:51:38.009422 | orchestrator | 2026-02-19 03:51:38.009481 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-19 03:51:38.009492 | orchestrator | Thursday 19 February 2026 03:51:27 +0000 (0:00:43.274) 0:00:48.448 ***** 2026-02-19 03:51:38.009502 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-02-19 03:51:38.009514 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-02-19 03:51:38.009525 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-02-19 03:51:38.009536 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-02-19 03:51:38.009546 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-02-19 03:51:38.009556 | orchestrator | 2026-02-19 03:51:38.009565 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-19 03:51:38.009572 | 
orchestrator | Thursday 19 February 2026 03:51:31 +0000 (0:00:04.352) 0:00:52.801 ***** 2026-02-19 03:51:38.009578 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-19 03:51:38.009586 | orchestrator | 2026-02-19 03:51:38.009596 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-19 03:51:38.009607 | orchestrator | Thursday 19 February 2026 03:51:32 +0000 (0:00:00.477) 0:00:53.279 ***** 2026-02-19 03:51:38.009616 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:51:38.009626 | orchestrator | 2026-02-19 03:51:38.009637 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-19 03:51:38.009646 | orchestrator | Thursday 19 February 2026 03:51:32 +0000 (0:00:00.141) 0:00:53.420 ***** 2026-02-19 03:51:38.009656 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:51:38.009667 | orchestrator | 2026-02-19 03:51:38.009695 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-02-19 03:51:38.009707 | orchestrator | Thursday 19 February 2026 03:51:32 +0000 (0:00:00.626) 0:00:54.047 ***** 2026-02-19 03:51:38.009718 | orchestrator | changed: [testbed-manager] 2026-02-19 03:51:38.009746 | orchestrator | 2026-02-19 03:51:38.009754 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-02-19 03:51:38.009761 | orchestrator | Thursday 19 February 2026 03:51:34 +0000 (0:00:01.592) 0:00:55.639 ***** 2026-02-19 03:51:38.009768 | orchestrator | changed: [testbed-manager] 2026-02-19 03:51:38.009775 | orchestrator | 2026-02-19 03:51:38.009783 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-02-19 03:51:38.009790 | orchestrator | Thursday 19 February 2026 03:51:35 +0000 (0:00:00.801) 0:00:56.441 ***** 2026-02-19 03:51:38.009798 | orchestrator | changed: [testbed-manager] 2026-02-19 03:51:38.009807 | 
orchestrator | 2026-02-19 03:51:38.009817 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-02-19 03:51:38.009828 | orchestrator | Thursday 19 February 2026 03:51:35 +0000 (0:00:00.614) 0:00:57.056 ***** 2026-02-19 03:51:38.009838 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-19 03:51:38.009849 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-19 03:51:38.009860 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-19 03:51:38.009870 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-19 03:51:38.009882 | orchestrator | 2026-02-19 03:51:38.009893 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:51:38.009905 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 03:51:38.009917 | orchestrator | 2026-02-19 03:51:38.009927 | orchestrator | 2026-02-19 03:51:38.009956 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:51:38.009964 | orchestrator | Thursday 19 February 2026 03:51:37 +0000 (0:00:01.626) 0:00:58.682 ***** 2026-02-19 03:51:38.009970 | orchestrator | =============================================================================== 2026-02-19 03:51:38.009976 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.27s 2026-02-19 03:51:38.009982 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.35s 2026-02-19 03:51:38.009988 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.63s 2026-02-19 03:51:38.009994 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.59s 2026-02-19 03:51:38.010001 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.52s 2026-02-19 03:51:38.010007 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s 2026-02-19 03:51:38.010065 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.97s 2026-02-19 03:51:38.010074 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s 2026-02-19 03:51:38.010080 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.80s 2026-02-19 03:51:38.010086 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.63s 2026-02-19 03:51:38.010092 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s 2026-02-19 03:51:38.010099 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2026-02-19 03:51:38.010105 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2026-02-19 03:51:38.010111 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-02-19 03:51:40.625704 | orchestrator | 2026-02-19 03:51:40 | INFO  | Task 63bb7168-d8b6-4371-b068-d076d9e16dcd (ceph-bootstrap-dashboard) was prepared for execution. 2026-02-19 03:51:40.625785 | orchestrator | 2026-02-19 03:51:40 | INFO  | It takes a moment until task 63bb7168-d8b6-4371-b068-d076d9e16dcd (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-02-19 03:53:15.880540 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-19 03:53:15.880645 | orchestrator | 2.16.14 2026-02-19 03:53:15.880657 | orchestrator | 2026-02-19 03:53:15.880687 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-02-19 03:53:15.880695 | orchestrator | 2026-02-19 03:53:15.880702 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-19 03:53:15.880709 | orchestrator | Thursday 19 February 2026 03:51:45 +0000 (0:00:00.333) 0:00:00.333 ***** 2026-02-19 03:53:15.880715 | orchestrator | changed: [testbed-manager] 2026-02-19 03:53:15.880723 | orchestrator | 2026-02-19 03:53:15.880730 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-19 03:53:15.880736 | orchestrator | Thursday 19 February 2026 03:51:46 +0000 (0:00:01.491) 0:00:01.824 ***** 2026-02-19 03:53:15.880742 | orchestrator | changed: [testbed-manager] 2026-02-19 03:53:15.880749 | orchestrator | 2026-02-19 03:53:15.880755 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-19 03:53:15.880761 | orchestrator | Thursday 19 February 2026 03:51:47 +0000 (0:00:01.050) 0:00:02.874 ***** 2026-02-19 03:53:15.880767 | orchestrator | changed: [testbed-manager] 2026-02-19 03:53:15.880774 | orchestrator | 2026-02-19 03:53:15.880780 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-19 03:53:15.880786 | orchestrator | Thursday 19 February 2026 03:51:48 +0000 (0:00:01.074) 0:00:03.949 ***** 2026-02-19 03:53:15.880792 | orchestrator | changed: [testbed-manager] 2026-02-19 03:53:15.880798 | orchestrator | 2026-02-19 03:53:15.880805 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-19 03:53:15.880811 | orchestrator | Thursday 19 February 
2026 03:51:50 +0000 (0:00:01.135) 0:00:05.084 ***** 2026-02-19 03:53:15.880817 | orchestrator | changed: [testbed-manager] 2026-02-19 03:53:15.880823 | orchestrator | 2026-02-19 03:53:15.880846 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-19 03:53:15.880852 | orchestrator | Thursday 19 February 2026 03:51:51 +0000 (0:00:00.980) 0:00:06.065 ***** 2026-02-19 03:53:15.880858 | orchestrator | changed: [testbed-manager] 2026-02-19 03:53:15.880865 | orchestrator | 2026-02-19 03:53:15.880871 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-19 03:53:15.880878 | orchestrator | Thursday 19 February 2026 03:51:52 +0000 (0:00:00.986) 0:00:07.052 ***** 2026-02-19 03:53:15.880884 | orchestrator | changed: [testbed-manager] 2026-02-19 03:53:15.880890 | orchestrator | 2026-02-19 03:53:15.880897 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-19 03:53:15.880903 | orchestrator | Thursday 19 February 2026 03:51:53 +0000 (0:00:01.254) 0:00:08.307 ***** 2026-02-19 03:53:15.880909 | orchestrator | changed: [testbed-manager] 2026-02-19 03:53:15.880915 | orchestrator | 2026-02-19 03:53:15.880921 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-19 03:53:15.880928 | orchestrator | Thursday 19 February 2026 03:51:54 +0000 (0:00:01.202) 0:00:09.509 ***** 2026-02-19 03:53:15.880934 | orchestrator | changed: [testbed-manager] 2026-02-19 03:53:15.880940 | orchestrator | 2026-02-19 03:53:15.880946 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-19 03:53:15.880952 | orchestrator | Thursday 19 February 2026 03:52:50 +0000 (0:00:56.268) 0:01:05.778 ***** 2026-02-19 03:53:15.880958 | orchestrator | skipping: [testbed-manager] 2026-02-19 03:53:15.880965 | orchestrator | 2026-02-19 03:53:15.880971 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-02-19 03:53:15.880977 | orchestrator | 2026-02-19 03:53:15.880984 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-19 03:53:15.880990 | orchestrator | Thursday 19 February 2026 03:52:50 +0000 (0:00:00.172) 0:01:05.950 ***** 2026-02-19 03:53:15.880996 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:53:15.881003 | orchestrator | 2026-02-19 03:53:15.881009 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-19 03:53:15.881015 | orchestrator | 2026-02-19 03:53:15.881021 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-19 03:53:15.881027 | orchestrator | Thursday 19 February 2026 03:53:02 +0000 (0:00:11.912) 0:01:17.862 ***** 2026-02-19 03:53:15.881043 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:53:15.881055 | orchestrator | 2026-02-19 03:53:15.881068 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-19 03:53:15.881080 | orchestrator | 2026-02-19 03:53:15.881092 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-19 03:53:15.881106 | orchestrator | Thursday 19 February 2026 03:53:14 +0000 (0:00:11.302) 0:01:29.165 ***** 2026-02-19 03:53:15.881119 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:53:15.881131 | orchestrator | 2026-02-19 03:53:15.881138 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:53:15.881146 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 03:53:15.881153 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:53:15.881160 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:53:15.881166 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 03:53:15.881172 | orchestrator | 2026-02-19 03:53:15.881177 | orchestrator | 2026-02-19 03:53:15.881183 | orchestrator | 2026-02-19 03:53:15.881189 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:53:15.881195 | orchestrator | Thursday 19 February 2026 03:53:15 +0000 (0:00:01.350) 0:01:30.515 ***** 2026-02-19 03:53:15.881202 | orchestrator | =============================================================================== 2026-02-19 03:53:15.881207 | orchestrator | Create admin user ------------------------------------------------------ 56.27s 2026-02-19 03:53:15.881233 | orchestrator | Restart ceph manager service ------------------------------------------- 24.56s 2026-02-19 03:53:15.881240 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.49s 2026-02-19 03:53:15.881246 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.25s 2026-02-19 03:53:15.881252 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.20s 2026-02-19 03:53:15.881257 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.14s 2026-02-19 03:53:15.881263 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.07s 2026-02-19 03:53:15.881269 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.05s 2026-02-19 03:53:15.881275 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.99s 2026-02-19 03:53:15.881280 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.98s 2026-02-19 03:53:15.881286 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.17s 2026-02-19 03:53:16.190060 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-02-19 03:53:18.249909 | orchestrator | 2026-02-19 03:53:18 | INFO  | Task bd625af6-d3bd-4d2a-8956-6ae5f8d3918c (keystone) was prepared for execution. 2026-02-19 03:53:18.250005 | orchestrator | 2026-02-19 03:53:18 | INFO  | It takes a moment until task bd625af6-d3bd-4d2a-8956-6ae5f8d3918c (keystone) has been started and output is visible here. 2026-02-19 03:53:25.783270 | orchestrator | 2026-02-19 03:53:25.783377 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 03:53:25.783388 | orchestrator | 2026-02-19 03:53:25.783395 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 03:53:25.783452 | orchestrator | Thursday 19 February 2026 03:53:22 +0000 (0:00:00.254) 0:00:00.254 ***** 2026-02-19 03:53:25.783459 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:53:25.783467 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:53:25.783473 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:53:25.783480 | orchestrator | 2026-02-19 03:53:25.783505 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 03:53:25.783512 | orchestrator | Thursday 19 February 2026 03:53:22 +0000 (0:00:00.303) 0:00:00.558 ***** 2026-02-19 03:53:25.783519 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-19 03:53:25.783526 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-19 03:53:25.783533 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-19 03:53:25.783541 | orchestrator | 2026-02-19 03:53:25.783548 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-19 03:53:25.783556 | orchestrator | 2026-02-19 03:53:25.783563 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-02-19 03:53:25.783571 | orchestrator | Thursday 19 February 2026 03:53:23 +0000 (0:00:00.487) 0:00:01.046 ***** 2026-02-19 03:53:25.783579 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:53:25.783588 | orchestrator | 2026-02-19 03:53:25.783595 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-19 03:53:25.783603 | orchestrator | Thursday 19 February 2026 03:53:23 +0000 (0:00:00.626) 0:00:01.672 ***** 2026-02-19 03:53:25.783615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-19 03:53:25.783626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-19 03:53:25.783656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-19 03:53:25.783670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-19 03:53:25.783679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-19 03:53:25.783686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-19 03:53:25.783693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-19 03:53:25.783701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-19 03:53:25.783708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-19 03:53:25.783720 | orchestrator | 2026-02-19 03:53:25.783727 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-02-19 03:53:25.783739 | orchestrator | Thursday 19 February 2026 03:53:25 +0000 (0:00:02.016) 0:00:03.689 ***** 2026-02-19 03:53:32.059002 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:53:32.059138 | orchestrator | 2026-02-19 03:53:32.059181 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-19 03:53:32.059200 | orchestrator | Thursday 19 February 2026 03:53:26 +0000 (0:00:00.302) 0:00:03.991 ***** 2026-02-19 03:53:32.059217 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:53:32.059234 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:53:32.059250 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:53:32.059266 | orchestrator | 2026-02-19 03:53:32.059282 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-19 03:53:32.059299 | orchestrator | Thursday 19 February 2026 03:53:26 +0000 (0:00:00.336) 0:00:04.328 ***** 2026-02-19 03:53:32.059315 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 03:53:32.059332 | orchestrator | 2026-02-19 03:53:32.059350 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-19 03:53:32.059365 | orchestrator | Thursday 19 February 2026 03:53:27 +0000 (0:00:00.911) 0:00:05.240 ***** 2026-02-19 03:53:32.059434 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:53:32.059447 | orchestrator | 2026-02-19 03:53:32.059457 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-19 03:53:32.059467 | orchestrator | Thursday 19 February 2026 03:53:27 +0000 (0:00:00.610) 0:00:05.851 ***** 2026-02-19 03:53:32.059483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:32.059499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:32.059511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:32.059571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:32.059588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:32.059606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:32.059622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:32.059639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:32.059667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:32.059685 | orchestrator |
2026-02-19 03:53:32.059703 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-02-19 03:53:32.059719 | orchestrator | Thursday 19 February 2026 03:53:31 +0000 (0:00:03.513) 0:00:09.364 *****
2026-02-19 03:53:32.059749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:32.905917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:32.906145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:32.906168 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:53:32.906183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:32.906213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:32.906227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:32.906236 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:53:32.906265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:32.906275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:32.906284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:32.906299 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:53:32.906308 | orchestrator |
2026-02-19 03:53:32.906318 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-02-19 03:53:32.906328 | orchestrator | Thursday 19 February 2026 03:53:32 +0000 (0:00:00.607) 0:00:09.972 *****
2026-02-19 03:53:32.906338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:32.906352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:32.906370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:36.615955 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:53:36.616043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:36.616062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:36.616099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:36.616111 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:53:36.616136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:36.616149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:36.616174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:36.616182 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:53:36.616188 | orchestrator |
2026-02-19 03:53:36.616196 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-02-19 03:53:36.616203 | orchestrator | Thursday 19 February 2026 03:53:32 +0000 (0:00:00.844) 0:00:10.816 *****
2026-02-19 03:53:36.616210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:36.616225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:36.616238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:36.616251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:41.627147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:41.627268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:41.627281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:41.627289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:41.627312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:41.627321 | orchestrator |
2026-02-19 03:53:41.627329 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-02-19 03:53:41.627337 | orchestrator | Thursday 19 February 2026 03:53:36 +0000 (0:00:03.712) 0:00:14.529 *****
2026-02-19 03:53:41.627385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:41.627402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:41.627411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:41.627418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:41.627431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-19 03:53:41.627443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-19 03:53:45.214267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:45.214411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:45.214430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-19 03:53:45.214444 | orchestrator |
2026-02-19 03:53:45.214458 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-02-19 03:53:45.214471 | orchestrator | Thursday 19 February 2026 03:53:41 +0000 (0:00:05.007) 0:00:19.537 *****
2026-02-19 03:53:45.214483 | orchestrator | changed: [testbed-node-0]
2026-02-19 03:53:45.214494 | orchestrator | changed: [testbed-node-1]
2026-02-19 03:53:45.214501 | orchestrator | changed: [testbed-node-2]
2026-02-19 03:53:45.214508 | orchestrator |
2026-02-19 03:53:45.214515 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-02-19 03:53:45.214522 | orchestrator | Thursday 19 February 2026 03:53:42 +0000 (0:00:01.386) 0:00:20.923 *****
2026-02-19 03:53:45.214529 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:53:45.214537 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:53:45.214544 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:53:45.214554 | orchestrator |
2026-02-19 03:53:45.214565 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-02-19 03:53:45.214576 | orchestrator | Thursday 19 February 2026 03:53:43 +0000 (0:00:00.495) 0:00:21.762 *****
2026-02-19 03:53:45.214587 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:53:45.214615 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:53:45.214627 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:53:45.214637 | orchestrator |
2026-02-19 03:53:45.214648 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-02-19 03:53:45.214659 | orchestrator | Thursday 19 February 2026 03:53:44 +0000 (0:00:00.299) 0:00:22.258 *****
2026-02-19 03:53:45.214670 | orchestrator | skipping: [testbed-node-0]
2026-02-19 03:53:45.214681 | orchestrator | skipping: [testbed-node-1]
2026-02-19 03:53:45.214692 | orchestrator | skipping: [testbed-node-2]
2026-02-19 03:53:45.214704 | orchestrator |
2026-02-19 03:53:45.214714 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-02-19 03:53:45.214725 | orchestrator | Thursday 19 February 2026 03:53:44 +0000 (0:00:00.299) 0:00:22.558 *****
2026-02-19 03:53:45.214780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes':
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-19 03:53:45.214793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 03:53:45.214806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 03:53:45.214817 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:53:45.214828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-19 03:53:45.214847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 03:53:45.214868 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 03:53:45.214880 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:53:45.214899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-19 03:54:05.210394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 03:54:05.210527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 03:54:05.210543 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:54:05.210555 | orchestrator | 2026-02-19 03:54:05.210564 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-19 03:54:05.210574 | orchestrator | Thursday 19 February 2026 03:53:45 +0000 (0:00:00.570) 0:00:23.128 ***** 2026-02-19 03:54:05.210582 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:54:05.210590 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:54:05.210598 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:54:05.210605 | orchestrator | 2026-02-19 03:54:05.210614 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-19 03:54:05.210622 | orchestrator | Thursday 19 February 2026 03:53:45 +0000 (0:00:00.383) 0:00:23.511 ***** 2026-02-19 03:54:05.210630 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-19 03:54:05.210662 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-19 03:54:05.210684 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-19 03:54:05.210692 | orchestrator | 2026-02-19 03:54:05.210700 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-19 03:54:05.210708 | orchestrator | Thursday 19 February 2026 03:53:47 +0000 (0:00:02.009) 0:00:25.521 ***** 2026-02-19 03:54:05.210716 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 03:54:05.210724 | orchestrator | 2026-02-19 03:54:05.210733 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-19 03:54:05.210746 | orchestrator | Thursday 19 February 2026 03:53:48 +0000 (0:00:00.964) 0:00:26.486 ***** 2026-02-19 03:54:05.210759 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:54:05.210772 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:54:05.210784 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:54:05.210797 | orchestrator | 2026-02-19 03:54:05.210809 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-19 03:54:05.210819 | orchestrator | Thursday 19 February 2026 03:53:49 +0000 (0:00:00.605) 0:00:27.091 ***** 2026-02-19 03:54:05.210831 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 03:54:05.210843 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-19 03:54:05.210857 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-19 03:54:05.210871 | orchestrator | 2026-02-19 03:54:05.210881 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-19 03:54:05.210890 | orchestrator | Thursday 19 February 2026 03:53:50 +0000 (0:00:01.130) 
0:00:28.221 ***** 2026-02-19 03:54:05.210900 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:54:05.210910 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:54:05.210919 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:54:05.210928 | orchestrator | 2026-02-19 03:54:05.210937 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-19 03:54:05.210946 | orchestrator | Thursday 19 February 2026 03:53:50 +0000 (0:00:00.618) 0:00:28.840 ***** 2026-02-19 03:54:05.210955 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-19 03:54:05.210965 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-19 03:54:05.210974 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-19 03:54:05.210984 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-19 03:54:05.210993 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-19 03:54:05.211002 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-19 03:54:05.211012 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-19 03:54:05.211020 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-19 03:54:05.211043 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-19 03:54:05.211052 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-19 03:54:05.211059 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-19 
03:54:05.211067 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-19 03:54:05.211075 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-19 03:54:05.211083 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-19 03:54:05.211099 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-19 03:54:05.211107 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-19 03:54:05.211114 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-19 03:54:05.211122 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-19 03:54:05.211130 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-19 03:54:05.211138 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-19 03:54:05.211146 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-19 03:54:05.211153 | orchestrator | 2026-02-19 03:54:05.211161 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-19 03:54:05.211169 | orchestrator | Thursday 19 February 2026 03:54:00 +0000 (0:00:09.198) 0:00:38.038 ***** 2026-02-19 03:54:05.211176 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-19 03:54:05.211184 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-19 03:54:05.211192 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-19 03:54:05.211200 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-19 03:54:05.211207 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-19 03:54:05.211221 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-19 03:54:05.211229 | orchestrator | 2026-02-19 03:54:05.211237 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-19 03:54:05.211246 | orchestrator | Thursday 19 February 2026 03:54:02 +0000 (0:00:02.718) 0:00:40.756 ***** 2026-02-19 03:54:05.211256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-19 03:54:05.211301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-19 03:55:44.597485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-19 03:55:44.597585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-19 03:55:44.597615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-19 03:55:44.597624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-19 03:55:44.597632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-19 03:55:44.597655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-19 03:55:44.597683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-19 03:55:44.597691 | orchestrator | 2026-02-19 03:55:44.597701 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-02-19 03:55:44.597710 | orchestrator | Thursday 19 February 2026 03:54:05 +0000 (0:00:02.366) 0:00:43.123 ***** 2026-02-19 03:55:44.597718 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:55:44.597726 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:55:44.597733 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:55:44.597740 | orchestrator | 2026-02-19 03:55:44.597748 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-19 03:55:44.597755 | orchestrator | Thursday 19 February 2026 03:54:05 +0000 (0:00:00.563) 0:00:43.687 ***** 2026-02-19 03:55:44.597763 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:55:44.597776 | orchestrator | 2026-02-19 03:55:44.597794 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-19 03:55:44.597808 | orchestrator | Thursday 19 February 2026 03:54:08 +0000 (0:00:02.417) 0:00:46.105 ***** 2026-02-19 03:55:44.597820 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:55:44.597832 | orchestrator | 2026-02-19 03:55:44.597863 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-19 03:55:44.597886 | orchestrator | Thursday 19 February 2026 03:54:10 +0000 (0:00:02.315) 0:00:48.420 ***** 2026-02-19 03:55:44.597899 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:55:44.597911 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:55:44.597923 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:55:44.597934 | orchestrator | 2026-02-19 03:55:44.597946 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-19 03:55:44.597957 | orchestrator | Thursday 19 February 2026 03:54:11 +0000 (0:00:00.902) 0:00:49.323 ***** 2026-02-19 03:55:44.597967 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:55:44.597978 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:55:44.597998 | orchestrator | ok: 
[testbed-node-2] 2026-02-19 03:55:44.598010 | orchestrator | 2026-02-19 03:55:44.598104 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-19 03:55:44.598120 | orchestrator | Thursday 19 February 2026 03:54:11 +0000 (0:00:00.341) 0:00:49.664 ***** 2026-02-19 03:55:44.598134 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:55:44.598148 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:55:44.598162 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:55:44.598228 | orchestrator | 2026-02-19 03:55:44.598245 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-19 03:55:44.598264 | orchestrator | Thursday 19 February 2026 03:54:12 +0000 (0:00:00.666) 0:00:50.331 ***** 2026-02-19 03:55:44.598276 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:55:44.598288 | orchestrator | 2026-02-19 03:55:44.598300 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-19 03:55:44.598312 | orchestrator | Thursday 19 February 2026 03:54:27 +0000 (0:00:15.220) 0:01:05.551 ***** 2026-02-19 03:55:44.598325 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:55:44.598352 | orchestrator | 2026-02-19 03:55:44.598365 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-19 03:55:44.598403 | orchestrator | Thursday 19 February 2026 03:54:39 +0000 (0:00:11.473) 0:01:17.025 ***** 2026-02-19 03:55:44.598416 | orchestrator | 2026-02-19 03:55:44.598430 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-19 03:55:44.598438 | orchestrator | Thursday 19 February 2026 03:54:39 +0000 (0:00:00.071) 0:01:17.096 ***** 2026-02-19 03:55:44.598445 | orchestrator | 2026-02-19 03:55:44.598452 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-19 
03:55:44.598459 | orchestrator | Thursday 19 February 2026 03:54:39 +0000 (0:00:00.087) 0:01:17.184 ***** 2026-02-19 03:55:44.598466 | orchestrator | 2026-02-19 03:55:44.598473 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-19 03:55:44.598481 | orchestrator | Thursday 19 February 2026 03:54:39 +0000 (0:00:00.081) 0:01:17.265 ***** 2026-02-19 03:55:44.598488 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:55:44.598495 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:55:44.598502 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:55:44.598509 | orchestrator | 2026-02-19 03:55:44.598516 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-19 03:55:44.598523 | orchestrator | Thursday 19 February 2026 03:55:22 +0000 (0:00:42.876) 0:02:00.141 ***** 2026-02-19 03:55:44.598530 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:55:44.598537 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:55:44.598544 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:55:44.598552 | orchestrator | 2026-02-19 03:55:44.598559 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-19 03:55:44.598566 | orchestrator | Thursday 19 February 2026 03:55:32 +0000 (0:00:10.147) 0:02:10.288 ***** 2026-02-19 03:55:44.598573 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:55:44.598580 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:55:44.598587 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:55:44.598595 | orchestrator | 2026-02-19 03:55:44.598602 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-19 03:55:44.598609 | orchestrator | Thursday 19 February 2026 03:55:43 +0000 (0:00:11.571) 0:02:21.860 ***** 2026-02-19 03:55:44.598628 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:56:40.455533 | orchestrator | 2026-02-19 03:56:40.455666 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-02-19 03:56:40.455749 | orchestrator | Thursday 19 February 2026 03:55:44 +0000 (0:00:00.653) 0:02:22.513 ***** 2026-02-19 03:56:40.455771 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:56:40.455792 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:56:40.455811 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:56:40.455830 | orchestrator | 2026-02-19 03:56:40.455848 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-02-19 03:56:40.455866 | orchestrator | Thursday 19 February 2026 03:55:45 +0000 (0:00:01.290) 0:02:23.804 ***** 2026-02-19 03:56:40.455885 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:56:40.455905 | orchestrator | 2026-02-19 03:56:40.455925 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-02-19 03:56:40.455944 | orchestrator | Thursday 19 February 2026 03:55:47 +0000 (0:00:01.804) 0:02:25.609 ***** 2026-02-19 03:56:40.455963 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-02-19 03:56:40.455983 | orchestrator | 2026-02-19 03:56:40.456002 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-02-19 03:56:40.456021 | orchestrator | Thursday 19 February 2026 03:56:00 +0000 (0:00:12.873) 0:02:38.482 ***** 2026-02-19 03:56:40.456040 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-02-19 03:56:40.456060 | orchestrator | 2026-02-19 03:56:40.456082 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-02-19 03:56:40.456148 | orchestrator | Thursday 19 February 2026 03:56:27 +0000 (0:00:27.232) 0:03:05.714 ***** 2026-02-19 03:56:40.456324 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-19 03:56:40.456354 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-19 03:56:40.456370 | orchestrator | 2026-02-19 03:56:40.456388 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-19 03:56:40.456407 | orchestrator | Thursday 19 February 2026 03:56:35 +0000 (0:00:07.219) 0:03:12.934 ***** 2026-02-19 03:56:40.456423 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:56:40.456440 | orchestrator | 2026-02-19 03:56:40.456517 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-02-19 03:56:40.456533 | orchestrator | Thursday 19 February 2026 03:56:35 +0000 (0:00:00.143) 0:03:13.077 ***** 2026-02-19 03:56:40.456548 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:56:40.456564 | orchestrator | 2026-02-19 03:56:40.456579 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-02-19 03:56:40.456612 | orchestrator | Thursday 19 February 2026 03:56:35 +0000 (0:00:00.131) 0:03:13.209 ***** 2026-02-19 03:56:40.456627 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:56:40.456641 | orchestrator | 2026-02-19 03:56:40.456655 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-02-19 03:56:40.456667 | orchestrator | Thursday 19 February 2026 03:56:35 +0000 (0:00:00.147) 0:03:13.356 ***** 2026-02-19 03:56:40.456678 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:56:40.456690 | orchestrator | 2026-02-19 03:56:40.456701 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-02-19 03:56:40.456753 | orchestrator | Thursday 19 February 2026 03:56:36 +0000 (0:00:00.611) 0:03:13.967 ***** 2026-02-19 03:56:40.456771 | orchestrator | ok: [testbed-node-0] 2026-02-19 
03:56:40.456783 | orchestrator | 2026-02-19 03:56:40.456796 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-19 03:56:40.456810 | orchestrator | Thursday 19 February 2026 03:56:39 +0000 (0:00:03.456) 0:03:17.424 ***** 2026-02-19 03:56:40.456822 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:56:40.456835 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:56:40.456849 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:56:40.456863 | orchestrator | 2026-02-19 03:56:40.456876 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:56:40.456890 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-19 03:56:40.456905 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-19 03:56:40.456913 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-19 03:56:40.456921 | orchestrator | 2026-02-19 03:56:40.456929 | orchestrator | 2026-02-19 03:56:40.456937 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:56:40.456945 | orchestrator | Thursday 19 February 2026 03:56:39 +0000 (0:00:00.474) 0:03:17.899 ***** 2026-02-19 03:56:40.456953 | orchestrator | =============================================================================== 2026-02-19 03:56:40.456960 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 42.88s 2026-02-19 03:56:40.456969 | orchestrator | service-ks-register : keystone | Creating services --------------------- 27.23s 2026-02-19 03:56:40.456976 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.22s 2026-02-19 03:56:40.456984 | orchestrator | keystone : Creating admin project, user, role, service, and 
endpoint --- 12.87s 2026-02-19 03:56:40.456992 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.57s 2026-02-19 03:56:40.456999 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.47s 2026-02-19 03:56:40.457007 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.15s 2026-02-19 03:56:40.457061 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.20s 2026-02-19 03:56:40.457070 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.22s 2026-02-19 03:56:40.457100 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.01s 2026-02-19 03:56:40.457109 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.71s 2026-02-19 03:56:40.457116 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.51s 2026-02-19 03:56:40.457124 | orchestrator | keystone : Creating default user role ----------------------------------- 3.46s 2026-02-19 03:56:40.457132 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.72s 2026-02-19 03:56:40.457140 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.42s 2026-02-19 03:56:40.457147 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.37s 2026-02-19 03:56:40.457155 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.32s 2026-02-19 03:56:40.457163 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.02s 2026-02-19 03:56:40.457170 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.01s 2026-02-19 03:56:40.457178 | orchestrator | keystone : Run key distribution ----------------------------------------- 
1.80s 2026-02-19 03:56:42.901708 | orchestrator | 2026-02-19 03:56:42 | INFO  | Task 5b8fa450-b021-4e40-8258-e376955b5780 (placement) was prepared for execution. 2026-02-19 03:56:42.901809 | orchestrator | 2026-02-19 03:56:42 | INFO  | It takes a moment until task 5b8fa450-b021-4e40-8258-e376955b5780 (placement) has been started and output is visible here. 2026-02-19 03:57:19.967917 | orchestrator | 2026-02-19 03:57:19.968033 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 03:57:19.968049 | orchestrator | 2026-02-19 03:57:19.968058 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 03:57:19.968069 | orchestrator | Thursday 19 February 2026 03:56:47 +0000 (0:00:00.260) 0:00:00.260 ***** 2026-02-19 03:57:19.968078 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:57:19.968090 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:57:19.968096 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:57:19.968102 | orchestrator | 2026-02-19 03:57:19.968108 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 03:57:19.968114 | orchestrator | Thursday 19 February 2026 03:56:47 +0000 (0:00:00.310) 0:00:00.571 ***** 2026-02-19 03:57:19.968120 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-19 03:57:19.968139 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-19 03:57:19.968144 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-19 03:57:19.968150 | orchestrator | 2026-02-19 03:57:19.968155 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-19 03:57:19.968161 | orchestrator | 2026-02-19 03:57:19.968166 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-19 03:57:19.968172 | orchestrator | Thursday 19 February 2026 
03:56:47 +0000 (0:00:00.460) 0:00:01.032 ***** 2026-02-19 03:57:19.968178 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:57:19.968184 | orchestrator | 2026-02-19 03:57:19.968190 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-19 03:57:19.968221 | orchestrator | Thursday 19 February 2026 03:56:48 +0000 (0:00:00.604) 0:00:01.636 ***** 2026-02-19 03:57:19.968227 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-19 03:57:19.968232 | orchestrator | 2026-02-19 03:57:19.968238 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-19 03:57:19.968244 | orchestrator | Thursday 19 February 2026 03:56:52 +0000 (0:00:04.219) 0:00:05.856 ***** 2026-02-19 03:57:19.968275 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-19 03:57:19.968286 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-19 03:57:19.968295 | orchestrator | 2026-02-19 03:57:19.968304 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-19 03:57:19.968313 | orchestrator | Thursday 19 February 2026 03:56:59 +0000 (0:00:07.159) 0:00:13.015 ***** 2026-02-19 03:57:19.968322 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-19 03:57:19.968331 | orchestrator | 2026-02-19 03:57:19.968336 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-19 03:57:19.968342 | orchestrator | Thursday 19 February 2026 03:57:03 +0000 (0:00:03.954) 0:00:16.970 ***** 2026-02-19 03:57:19.968347 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-19 03:57:19.968352 | orchestrator | changed: [testbed-node-0] => (item=placement 
-> service) 2026-02-19 03:57:19.968358 | orchestrator | 2026-02-19 03:57:19.968363 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-19 03:57:19.968369 | orchestrator | Thursday 19 February 2026 03:57:08 +0000 (0:00:04.330) 0:00:21.300 ***** 2026-02-19 03:57:19.968374 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-19 03:57:19.968380 | orchestrator | 2026-02-19 03:57:19.968385 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-19 03:57:19.968393 | orchestrator | Thursday 19 February 2026 03:57:11 +0000 (0:00:03.427) 0:00:24.727 ***** 2026-02-19 03:57:19.968401 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-19 03:57:19.968410 | orchestrator | 2026-02-19 03:57:19.968419 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-19 03:57:19.968428 | orchestrator | Thursday 19 February 2026 03:57:15 +0000 (0:00:03.918) 0:00:28.646 ***** 2026-02-19 03:57:19.968438 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:57:19.968444 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:57:19.968453 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:57:19.968462 | orchestrator | 2026-02-19 03:57:19.968472 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-19 03:57:19.968481 | orchestrator | Thursday 19 February 2026 03:57:15 +0000 (0:00:00.322) 0:00:28.968 ***** 2026-02-19 03:57:19.968495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:19.968529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:19.968544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:19.968551 | orchestrator | 2026-02-19 03:57:19.968558 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-19 03:57:19.968564 | orchestrator | Thursday 19 February 2026 03:57:17 +0000 (0:00:01.233) 0:00:30.202 ***** 2026-02-19 03:57:19.968570 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:57:19.968577 | orchestrator | 2026-02-19 03:57:19.968583 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-19 03:57:19.968636 | orchestrator | Thursday 19 February 2026 03:57:17 +0000 (0:00:00.371) 0:00:30.573 ***** 2026-02-19 03:57:19.968642 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:57:19.968652 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:57:19.968663 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:57:19.968673 | orchestrator | 2026-02-19 03:57:19.968698 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-19 03:57:19.968715 | orchestrator | Thursday 19 February 2026 03:57:17 +0000 (0:00:00.330) 0:00:30.904 ***** 2026-02-19 03:57:19.968726 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 03:57:19.968736 | orchestrator | 2026-02-19 03:57:19.968746 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-19 03:57:19.968756 | orchestrator | Thursday 19 February 2026 03:57:18 +0000 
(0:00:00.526) 0:00:31.430 ***** 2026-02-19 03:57:19.968766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:19.968783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:22.951434 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:22.951560 | orchestrator | 2026-02-19 03:57:22.951587 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-19 03:57:22.951606 | orchestrator | Thursday 19 February 2026 03:57:19 +0000 (0:00:01.664) 0:00:33.094 ***** 2026-02-19 03:57:22.951626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-19 03:57:22.951643 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:57:22.951663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-19 03:57:22.951681 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:57:22.951699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-19 03:57:22.951748 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:57:22.951767 | orchestrator | 2026-02-19 03:57:22.951782 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-19 03:57:22.951818 | orchestrator | Thursday 19 February 2026 03:57:20 +0000 (0:00:00.488) 0:00:33.583 ***** 2026-02-19 03:57:22.951847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-19 03:57:22.951865 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:57:22.951880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-19 03:57:22.951891 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:57:22.951901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-19 03:57:22.951911 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:57:22.951926 | orchestrator | 2026-02-19 03:57:22.951943 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-19 03:57:22.951960 | orchestrator | Thursday 19 February 2026 03:57:21 +0000 (0:00:00.743) 0:00:34.327 ***** 2026-02-19 03:57:22.951990 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:22.952027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:30.248966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:30.249067 | orchestrator | 2026-02-19 03:57:30.249085 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-19 03:57:30.249099 | orchestrator | Thursday 19 February 2026 03:57:22 +0000 (0:00:01.756) 0:00:36.084 ***** 2026-02-19 03:57:30.249112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-19 03:57:30.249127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:30.249189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:30.249267 | orchestrator | 2026-02-19 03:57:30.249290 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-19 03:57:30.249303 | orchestrator | Thursday 19 February 2026 03:57:25 +0000 (0:00:02.329) 0:00:38.413 ***** 2026-02-19 03:57:30.249336 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-19 03:57:30.249352 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-19 03:57:30.249364 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-19 03:57:30.249378 | orchestrator | 2026-02-19 03:57:30.249391 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-19 03:57:30.249405 | orchestrator | Thursday 19 February 2026 03:57:26 +0000 (0:00:01.462) 0:00:39.876 ***** 2026-02-19 03:57:30.249418 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:57:30.249430 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:57:30.249438 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:57:30.249446 | orchestrator | 2026-02-19 03:57:30.249454 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-19 03:57:30.249462 | orchestrator | Thursday 19 February 2026 03:57:28 +0000 (0:00:01.467) 0:00:41.343 ***** 2026-02-19 03:57:30.249471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-19 03:57:30.249489 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:57:30.249500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-19 03:57:30.249517 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:57:30.249537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-19 03:57:30.249551 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:57:30.249565 | orchestrator | 2026-02-19 03:57:30.249588 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-19 03:57:30.249602 | orchestrator | Thursday 19 February 2026 03:57:29 +0000 (0:00:00.845) 0:00:42.189 ***** 2026-02-19 03:57:30.249621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:55.390376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:55.390467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-19 03:57:55.390474 | orchestrator | 2026-02-19 03:57:55.390481 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-19 03:57:55.390486 | orchestrator | Thursday 19 February 2026 03:57:30 +0000 (0:00:01.196) 0:00:43.386 ***** 2026-02-19 03:57:55.390490 | orchestrator | changed: [testbed-node-0] 2026-02-19 
03:57:55.390495 | orchestrator | 2026-02-19 03:57:55.390499 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-19 03:57:55.390503 | orchestrator | Thursday 19 February 2026 03:57:32 +0000 (0:00:02.347) 0:00:45.734 ***** 2026-02-19 03:57:55.390507 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:57:55.390511 | orchestrator | 2026-02-19 03:57:55.390514 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-19 03:57:55.390518 | orchestrator | Thursday 19 February 2026 03:57:34 +0000 (0:00:02.394) 0:00:48.128 ***** 2026-02-19 03:57:55.390522 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:57:55.390525 | orchestrator | 2026-02-19 03:57:55.390529 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-19 03:57:55.390533 | orchestrator | Thursday 19 February 2026 03:57:49 +0000 (0:00:14.842) 0:01:02.970 ***** 2026-02-19 03:57:55.390537 | orchestrator | 2026-02-19 03:57:55.390540 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-19 03:57:55.390544 | orchestrator | Thursday 19 February 2026 03:57:49 +0000 (0:00:00.068) 0:01:03.039 ***** 2026-02-19 03:57:55.390548 | orchestrator | 2026-02-19 03:57:55.390551 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-19 03:57:55.390555 | orchestrator | Thursday 19 February 2026 03:57:49 +0000 (0:00:00.069) 0:01:03.109 ***** 2026-02-19 03:57:55.390559 | orchestrator | 2026-02-19 03:57:55.390562 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-19 03:57:55.390593 | orchestrator | Thursday 19 February 2026 03:57:50 +0000 (0:00:00.079) 0:01:03.188 ***** 2026-02-19 03:57:55.390609 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:57:55.390615 | orchestrator | changed: [testbed-node-1] 2026-02-19 
03:57:55.390621 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:57:55.390626 | orchestrator | 2026-02-19 03:57:55.390632 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 03:57:55.390639 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-19 03:57:55.390646 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-19 03:57:55.390653 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-19 03:57:55.390659 | orchestrator | 2026-02-19 03:57:55.390665 | orchestrator | 2026-02-19 03:57:55.390671 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 03:57:55.390685 | orchestrator | Thursday 19 February 2026 03:57:55 +0000 (0:00:04.962) 0:01:08.151 ***** 2026-02-19 03:57:55.390691 | orchestrator | =============================================================================== 2026-02-19 03:57:55.390697 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.84s 2026-02-19 03:57:55.390716 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.16s 2026-02-19 03:57:55.390723 | orchestrator | placement : Restart placement-api container ----------------------------- 4.96s 2026-02-19 03:57:55.390729 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.33s 2026-02-19 03:57:55.390735 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.22s 2026-02-19 03:57:55.390741 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.95s 2026-02-19 03:57:55.390747 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.92s 2026-02-19 03:57:55.390753 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.43s 2026-02-19 03:57:55.390759 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.39s 2026-02-19 03:57:55.390765 | orchestrator | placement : Creating placement databases -------------------------------- 2.35s 2026-02-19 03:57:55.390771 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.33s 2026-02-19 03:57:55.390776 | orchestrator | placement : Copying over config.json files for services ----------------- 1.76s 2026-02-19 03:57:55.390783 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.66s 2026-02-19 03:57:55.390790 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.47s 2026-02-19 03:57:55.390796 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.46s 2026-02-19 03:57:55.390802 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.23s 2026-02-19 03:57:55.390808 | orchestrator | placement : Check placement containers ---------------------------------- 1.20s 2026-02-19 03:57:55.390814 | orchestrator | placement : Copying over existing policy file --------------------------- 0.85s 2026-02-19 03:57:55.390821 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.74s 2026-02-19 03:57:55.390828 | orchestrator | placement : include_tasks ----------------------------------------------- 0.60s 2026-02-19 03:57:57.899575 | orchestrator | 2026-02-19 03:57:57 | INFO  | Task 3766a59b-d6b6-405b-9c02-9503e5dad3a4 (neutron) was prepared for execution. 2026-02-19 03:57:57.899673 | orchestrator | 2026-02-19 03:57:57 | INFO  | It takes a moment until task 3766a59b-d6b6-405b-9c02-9503e5dad3a4 (neutron) has been started and output is visible here. 
2026-02-19 03:58:49.042977 | orchestrator | 2026-02-19 03:58:49.043119 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 03:58:49.043149 | orchestrator | 2026-02-19 03:58:49.043171 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 03:58:49.043191 | orchestrator | Thursday 19 February 2026 03:58:02 +0000 (0:00:00.262) 0:00:00.262 ***** 2026-02-19 03:58:49.043210 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:58:49.043262 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:58:49.043281 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:58:49.043300 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:58:49.043320 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:58:49.043338 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:58:49.043358 | orchestrator | 2026-02-19 03:58:49.043377 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 03:58:49.043397 | orchestrator | Thursday 19 February 2026 03:58:02 +0000 (0:00:00.701) 0:00:00.964 ***** 2026-02-19 03:58:49.043416 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-19 03:58:49.043436 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-19 03:58:49.043455 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-19 03:58:49.043509 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-19 03:58:49.043530 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-19 03:58:49.043549 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-19 03:58:49.043568 | orchestrator | 2026-02-19 03:58:49.043588 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-19 03:58:49.043607 | orchestrator | 2026-02-19 03:58:49.043626 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-19 03:58:49.043666 | orchestrator | Thursday 19 February 2026 03:58:03 +0000 (0:00:00.616) 0:00:01.581 ***** 2026-02-19 03:58:49.043689 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 03:58:49.043709 | orchestrator | 2026-02-19 03:58:49.043727 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-19 03:58:49.043747 | orchestrator | Thursday 19 February 2026 03:58:04 +0000 (0:00:01.314) 0:00:02.896 ***** 2026-02-19 03:58:49.043766 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:58:49.043786 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:58:49.043806 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:58:49.043825 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:58:49.043845 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:58:49.043863 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:58:49.043879 | orchestrator | 2026-02-19 03:58:49.043897 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-19 03:58:49.043916 | orchestrator | Thursday 19 February 2026 03:58:06 +0000 (0:00:01.453) 0:00:04.349 ***** 2026-02-19 03:58:49.043935 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:58:49.043954 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:58:49.043972 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:58:49.043989 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:58:49.044008 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:58:49.044026 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:58:49.044047 | orchestrator | 2026-02-19 03:58:49.044066 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-19 03:58:49.044085 | orchestrator | Thursday 19 February 2026 03:58:07 +0000 (0:00:01.238) 0:00:05.588 ***** 
2026-02-19 03:58:49.044103 | orchestrator | ok: [testbed-node-0] => { 2026-02-19 03:58:49.044124 | orchestrator |  "changed": false, 2026-02-19 03:58:49.044143 | orchestrator |  "msg": "All assertions passed" 2026-02-19 03:58:49.044163 | orchestrator | } 2026-02-19 03:58:49.044183 | orchestrator | ok: [testbed-node-1] => { 2026-02-19 03:58:49.044203 | orchestrator |  "changed": false, 2026-02-19 03:58:49.044263 | orchestrator |  "msg": "All assertions passed" 2026-02-19 03:58:49.044283 | orchestrator | } 2026-02-19 03:58:49.044301 | orchestrator | ok: [testbed-node-2] => { 2026-02-19 03:58:49.044320 | orchestrator |  "changed": false, 2026-02-19 03:58:49.044339 | orchestrator |  "msg": "All assertions passed" 2026-02-19 03:58:49.044357 | orchestrator | } 2026-02-19 03:58:49.044376 | orchestrator | ok: [testbed-node-3] => { 2026-02-19 03:58:49.044394 | orchestrator |  "changed": false, 2026-02-19 03:58:49.044413 | orchestrator |  "msg": "All assertions passed" 2026-02-19 03:58:49.044432 | orchestrator | } 2026-02-19 03:58:49.044451 | orchestrator | ok: [testbed-node-4] => { 2026-02-19 03:58:49.044471 | orchestrator |  "changed": false, 2026-02-19 03:58:49.044490 | orchestrator |  "msg": "All assertions passed" 2026-02-19 03:58:49.044508 | orchestrator | } 2026-02-19 03:58:49.044526 | orchestrator | ok: [testbed-node-5] => { 2026-02-19 03:58:49.044545 | orchestrator |  "changed": false, 2026-02-19 03:58:49.044565 | orchestrator |  "msg": "All assertions passed" 2026-02-19 03:58:49.044584 | orchestrator | } 2026-02-19 03:58:49.044602 | orchestrator | 2026-02-19 03:58:49.044620 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-19 03:58:49.044638 | orchestrator | Thursday 19 February 2026 03:58:08 +0000 (0:00:00.934) 0:00:06.523 ***** 2026-02-19 03:58:49.044656 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:58:49.044692 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:58:49.044711 | orchestrator 
| skipping: [testbed-node-2] 2026-02-19 03:58:49.044729 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:58:49.044740 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:58:49.044751 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:58:49.044761 | orchestrator | 2026-02-19 03:58:49.044772 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-19 03:58:49.044783 | orchestrator | Thursday 19 February 2026 03:58:09 +0000 (0:00:00.644) 0:00:07.168 ***** 2026-02-19 03:58:49.044794 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-19 03:58:49.044805 | orchestrator | 2026-02-19 03:58:49.044815 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-19 03:58:49.044826 | orchestrator | Thursday 19 February 2026 03:58:13 +0000 (0:00:04.247) 0:00:11.415 ***** 2026-02-19 03:58:49.044841 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-19 03:58:49.044862 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-19 03:58:49.044880 | orchestrator | 2026-02-19 03:58:49.044927 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-19 03:58:49.044947 | orchestrator | Thursday 19 February 2026 03:58:20 +0000 (0:00:07.389) 0:00:18.805 ***** 2026-02-19 03:58:49.044967 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-19 03:58:49.044986 | orchestrator | 2026-02-19 03:58:49.045006 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-19 03:58:49.045026 | orchestrator | Thursday 19 February 2026 03:58:24 +0000 (0:00:03.393) 0:00:22.198 ***** 2026-02-19 03:58:49.045046 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-19 03:58:49.045064 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-19 03:58:49.045084 | orchestrator | 2026-02-19 03:58:49.045102 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-19 03:58:49.045120 | orchestrator | Thursday 19 February 2026 03:58:28 +0000 (0:00:04.244) 0:00:26.442 ***** 2026-02-19 03:58:49.045139 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-19 03:58:49.045158 | orchestrator | 2026-02-19 03:58:49.045177 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-19 03:58:49.045197 | orchestrator | Thursday 19 February 2026 03:58:31 +0000 (0:00:03.356) 0:00:29.799 ***** 2026-02-19 03:58:49.045241 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-19 03:58:49.045262 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-19 03:58:49.045281 | orchestrator | 2026-02-19 03:58:49.045299 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-19 03:58:49.045318 | orchestrator | Thursday 19 February 2026 03:58:40 +0000 (0:00:08.534) 0:00:38.333 ***** 2026-02-19 03:58:49.045337 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:58:49.045368 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:58:49.045386 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:58:49.045405 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:58:49.045422 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:58:49.045440 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:58:49.045459 | orchestrator | 2026-02-19 03:58:49.045478 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-19 03:58:49.045497 | orchestrator | Thursday 19 February 2026 03:58:40 +0000 (0:00:00.650) 0:00:38.984 ***** 2026-02-19 03:58:49.045516 | orchestrator | skipping: [testbed-node-0] 2026-02-19 
03:58:49.045534 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:58:49.045553 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:58:49.045573 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:58:49.045591 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:58:49.045610 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:58:49.045642 | orchestrator | 2026-02-19 03:58:49.045661 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-19 03:58:49.045680 | orchestrator | Thursday 19 February 2026 03:58:42 +0000 (0:00:02.022) 0:00:41.006 ***** 2026-02-19 03:58:49.045699 | orchestrator | ok: [testbed-node-0] 2026-02-19 03:58:49.045717 | orchestrator | ok: [testbed-node-1] 2026-02-19 03:58:49.045735 | orchestrator | ok: [testbed-node-2] 2026-02-19 03:58:49.045753 | orchestrator | ok: [testbed-node-3] 2026-02-19 03:58:49.045772 | orchestrator | ok: [testbed-node-4] 2026-02-19 03:58:49.045791 | orchestrator | ok: [testbed-node-5] 2026-02-19 03:58:49.045809 | orchestrator | 2026-02-19 03:58:49.045827 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-19 03:58:49.045846 | orchestrator | Thursday 19 February 2026 03:58:44 +0000 (0:00:01.250) 0:00:42.256 ***** 2026-02-19 03:58:49.045865 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:58:49.045884 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:58:49.045902 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:58:49.045920 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:58:49.045938 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:58:49.045957 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:58:49.045976 | orchestrator | 2026-02-19 03:58:49.045995 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-19 03:58:49.046013 | orchestrator | Thursday 19 February 2026 03:58:46 +0000 (0:00:02.215) 
0:00:44.472 ***** 2026-02-19 03:58:49.046129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:58:49.046175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 03:58:54.437274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:58:54.437385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:58:54.437403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 03:58:54.437414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 03:58:54.437422 | orchestrator | 2026-02-19 03:58:54.437433 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-19 03:58:54.437443 | orchestrator | Thursday 19 February 2026 03:58:49 +0000 (0:00:02.641) 0:00:47.113 ***** 2026-02-19 03:58:54.437451 | orchestrator | [WARNING]: Skipped 2026-02-19 03:58:54.437460 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-19 03:58:54.437470 | orchestrator | due to this access issue: 2026-02-19 03:58:54.437479 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-19 03:58:54.437488 | orchestrator | a directory 2026-02-19 03:58:54.437497 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 03:58:54.437505 | orchestrator | 2026-02-19 03:58:54.437514 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-19 03:58:54.437522 | orchestrator | Thursday 19 February 2026 03:58:49 +0000 (0:00:00.846) 0:00:47.960 ***** 2026-02-19 03:58:54.437544 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 03:58:54.437551 | orchestrator | 2026-02-19 03:58:54.437556 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-19 03:58:54.437560 | orchestrator | Thursday 19 February 2026 03:58:51 +0000 (0:00:01.298) 0:00:49.258 ***** 2026-02-19 03:58:54.437571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:58:54.437583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:58:54.437589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:58:54.437594 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 03:58:54.437604 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 03:58:59.378172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 03:58:59.378316 | orchestrator | 2026-02-19 03:58:59.378328 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-19 03:58:59.378335 | orchestrator | Thursday 19 February 2026 03:58:54 +0000 (0:00:03.247) 0:00:52.505 ***** 2026-02-19 03:58:59.378343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:58:59.378350 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:58:59.378357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:58:59.378362 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:58:59.378368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:58:59.378386 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:58:59.378404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:58:59.378409 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:58:59.378419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:58:59.378425 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:58:59.378430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:58:59.378435 | orchestrator | skipping: [testbed-node-5] 
2026-02-19 03:58:59.378441 | orchestrator | 2026-02-19 03:58:59.378446 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-19 03:58:59.378451 | orchestrator | Thursday 19 February 2026 03:58:56 +0000 (0:00:02.097) 0:00:54.603 ***** 2026-02-19 03:58:59.378456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:58:59.378462 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:58:59.378470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:59:04.878820 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:04.878914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:59:04.878925 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:04.878932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:04.878939 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:04.878946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:04.878952 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:04.878958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:04.878981 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:04.878988 | orchestrator | 2026-02-19 
03:59:04.878995 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-19 03:59:04.879003 | orchestrator | Thursday 19 February 2026 03:58:59 +0000 (0:00:02.846) 0:00:57.449 ***** 2026-02-19 03:59:04.879009 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:04.879015 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:04.879020 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:04.879026 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:04.879032 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:04.879038 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:04.879043 | orchestrator | 2026-02-19 03:59:04.879049 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-19 03:59:04.879055 | orchestrator | Thursday 19 February 2026 03:59:01 +0000 (0:00:02.393) 0:00:59.842 ***** 2026-02-19 03:59:04.879061 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:04.879067 | orchestrator | 2026-02-19 03:59:04.879073 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-19 03:59:04.879088 | orchestrator | Thursday 19 February 2026 03:59:01 +0000 (0:00:00.152) 0:00:59.994 ***** 2026-02-19 03:59:04.879095 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:04.879101 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:04.879106 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:04.879112 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:04.879118 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:04.879124 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:04.879129 | orchestrator | 2026-02-19 03:59:04.879135 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-19 03:59:04.879141 | orchestrator | Thursday 19 February 2026 03:59:02 +0000 (0:00:00.654) 
0:01:00.649 ***** 2026-02-19 03:59:04.879151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:59:04.879157 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:04.879163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 
03:59:04.879175 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:04.879181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:59:04.879187 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:04.879193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:04.879199 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:04.879213 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:13.313981 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:13.314125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:13.314141 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:13.314147 | orchestrator | 2026-02-19 03:59:13.314155 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-19 03:59:13.314164 | orchestrator | Thursday 19 February 2026 03:59:04 +0000 (0:00:02.297) 0:01:02.947 ***** 2026-02-19 03:59:13.314172 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:59:13.314204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:59:13.314211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:59:13.314303 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 03:59:13.314314 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 03:59:13.314327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 03:59:13.314333 | orchestrator | 2026-02-19 03:59:13.314339 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-19 03:59:13.314345 | orchestrator | Thursday 19 February 2026 03:59:08 +0000 (0:00:03.205) 0:01:06.152 ***** 2026-02-19 03:59:13.314352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:59:13.314359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:59:13.314376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 03:59:17.927664 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 03:59:17.927790 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 03:59:17.927803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:59:17.927811 | orchestrator | 2026-02-19 03:59:17.927820 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-19 03:59:17.927828 | orchestrator | Thursday 19 February 2026 03:59:13 +0000 (0:00:05.232) 0:01:11.385 ***** 2026-02-19 03:59:17.927847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-02-19 03:59:17.927855 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:17.927876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:59:17.927888 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:17.927895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:59:17.927901 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:17.927908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:17.927914 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:17.927921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:17.927927 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:17.927938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:17.927945 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:17.927956 | orchestrator | 2026-02-19 03:59:17.927962 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-19 03:59:17.927969 | orchestrator | Thursday 19 February 2026 03:59:15 +0000 (0:00:02.080) 0:01:13.465 ***** 2026-02-19 03:59:17.927975 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:17.927981 | orchestrator | changed: [testbed-node-0] 2026-02-19 03:59:17.927987 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:17.927994 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:17.928000 | orchestrator | changed: [testbed-node-2] 2026-02-19 03:59:17.928010 | orchestrator | changed: [testbed-node-1] 2026-02-19 03:59:37.478567 | orchestrator | 2026-02-19 03:59:37.478669 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-19 03:59:37.478686 | orchestrator | Thursday 19 February 2026 03:59:17 +0000 (0:00:02.527) 0:01:15.993 ***** 2026-02-19 03:59:37.478701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:37.478715 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:37.478728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:37.478737 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:37.478746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:37.478756 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:37.478781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:59:37.478830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:59:37.478837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 03:59:37.478843 | orchestrator | 2026-02-19 03:59:37.478849 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-19 03:59:37.478854 | orchestrator | Thursday 19 February 2026 03:59:21 +0000 (0:00:03.603) 0:01:19.597 ***** 2026-02-19 03:59:37.478860 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:37.478865 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:37.478870 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:37.478876 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:37.478881 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:37.478886 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:37.478891 | orchestrator | 2026-02-19 03:59:37.478897 | orchestrator | 
TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-02-19 03:59:37.478902 | orchestrator | Thursday 19 February 2026 03:59:23 +0000 (0:00:02.273) 0:01:21.871 ***** 2026-02-19 03:59:37.478908 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:37.478913 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:37.478918 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:37.478924 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:37.478929 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:37.478934 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:37.478940 | orchestrator | 2026-02-19 03:59:37.478945 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-19 03:59:37.478950 | orchestrator | Thursday 19 February 2026 03:59:26 +0000 (0:00:02.248) 0:01:24.119 ***** 2026-02-19 03:59:37.478956 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:37.478961 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:37.478967 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:37.478972 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:37.478977 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:37.478988 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:37.478993 | orchestrator | 2026-02-19 03:59:37.478998 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-19 03:59:37.479004 | orchestrator | Thursday 19 February 2026 03:59:28 +0000 (0:00:02.175) 0:01:26.295 ***** 2026-02-19 03:59:37.479009 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:37.479014 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:37.479020 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:37.479025 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:37.479030 | orchestrator | skipping: [testbed-node-4] 2026-02-19 
03:59:37.479035 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:37.479041 | orchestrator | 2026-02-19 03:59:37.479046 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-19 03:59:37.479051 | orchestrator | Thursday 19 February 2026 03:59:30 +0000 (0:00:02.176) 0:01:28.471 ***** 2026-02-19 03:59:37.479057 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:37.479062 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:37.479067 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:37.479073 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:37.479078 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:37.479083 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:37.479088 | orchestrator | 2026-02-19 03:59:37.479094 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-19 03:59:37.479099 | orchestrator | Thursday 19 February 2026 03:59:32 +0000 (0:00:02.396) 0:01:30.868 ***** 2026-02-19 03:59:37.479104 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:37.479114 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:37.479119 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:37.479124 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:37.479130 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:37.479135 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:37.479140 | orchestrator | 2026-02-19 03:59:37.479146 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-19 03:59:37.479151 | orchestrator | Thursday 19 February 2026 03:59:34 +0000 (0:00:02.177) 0:01:33.045 ***** 2026-02-19 03:59:37.479156 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-19 03:59:37.479162 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:37.479168 
| orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-19 03:59:37.479173 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:37.479179 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-19 03:59:37.479188 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:41.942686 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-19 03:59:41.942783 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:41.942795 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-19 03:59:41.942804 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:41.942812 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-19 03:59:41.942820 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:41.942828 | orchestrator | 2026-02-19 03:59:41.942837 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-19 03:59:41.942846 | orchestrator | Thursday 19 February 2026 03:59:37 +0000 (0:00:02.495) 0:01:35.541 ***** 2026-02-19 03:59:41.942858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:59:41.942889 | orchestrator | skipping: [testbed-node-2] 2026-02-19 03:59:41.942898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:59:41.942906 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:41.942928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:59:41.942937 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:41.942960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:41.942971 | orchestrator | skipping: [testbed-node-3] 2026-02-19 03:59:41.942979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-02-19 03:59:41.942994 | orchestrator | skipping: [testbed-node-4] 2026-02-19 03:59:41.943003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 03:59:41.943011 | orchestrator | skipping: [testbed-node-5] 2026-02-19 03:59:41.943019 | orchestrator | 2026-02-19 03:59:41.943027 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-19 03:59:41.943035 | orchestrator | Thursday 19 February 2026 03:59:39 +0000 (0:00:02.166) 0:01:37.707 ***** 2026-02-19 03:59:41.943043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:59:41.943051 | orchestrator | skipping: [testbed-node-0] 2026-02-19 03:59:41.943064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 03:59:41.943072 | orchestrator | skipping: [testbed-node-1] 2026-02-19 03:59:41.943087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 04:00:07.094758 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:00:07.094881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 04:00:07.094909 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:00:07.094930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 04:00:07.094951 | orchestrator | skipping: 
[testbed-node-5] 2026-02-19 04:00:07.094972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 04:00:07.094992 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:00:07.095013 | orchestrator | 2026-02-19 04:00:07.095034 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-19 04:00:07.095055 | orchestrator | Thursday 19 February 2026 03:59:41 +0000 (0:00:02.306) 0:01:40.014 ***** 2026-02-19 04:00:07.095075 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:00:07.095095 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:00:07.095116 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:00:07.095159 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:00:07.095182 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:00:07.095204 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:00:07.095224 | orchestrator | 2026-02-19 04:00:07.095267 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-19 04:00:07.095289 | orchestrator | Thursday 19 February 2026 03:59:44 +0000 (0:00:02.086) 0:01:42.101 ***** 2026-02-19 04:00:07.095309 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:00:07.095329 | orchestrator | 
skipping: [testbed-node-0] 2026-02-19 04:00:07.095350 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:00:07.095370 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:00:07.095391 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:00:07.095413 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:00:07.095493 | orchestrator | 2026-02-19 04:00:07.095516 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-02-19 04:00:07.095536 | orchestrator | Thursday 19 February 2026 03:59:47 +0000 (0:00:03.736) 0:01:45.838 ***** 2026-02-19 04:00:07.095556 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:00:07.095574 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:00:07.095594 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:00:07.095612 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:00:07.095631 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:00:07.095650 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:00:07.095669 | orchestrator | 2026-02-19 04:00:07.095689 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-02-19 04:00:07.095708 | orchestrator | Thursday 19 February 2026 03:59:50 +0000 (0:00:02.332) 0:01:48.170 ***** 2026-02-19 04:00:07.095727 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:00:07.095744 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:00:07.095763 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:00:07.095782 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:00:07.095801 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:00:07.095819 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:00:07.095837 | orchestrator | 2026-02-19 04:00:07.095857 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-02-19 04:00:07.095899 | orchestrator | Thursday 19 February 2026 03:59:52 +0000 (0:00:02.256) 
0:01:50.427 ***** 2026-02-19 04:00:07.095919 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:00:07.095937 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:00:07.095956 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:00:07.095990 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:00:07.096009 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:00:07.096027 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:00:07.096045 | orchestrator | 2026-02-19 04:00:07.096065 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-02-19 04:00:07.096086 | orchestrator | Thursday 19 February 2026 03:59:54 +0000 (0:00:02.084) 0:01:52.512 ***** 2026-02-19 04:00:07.096105 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:00:07.096123 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:00:07.096141 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:00:07.096161 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:00:07.096179 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:00:07.096197 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:00:07.096215 | orchestrator | 2026-02-19 04:00:07.096235 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-02-19 04:00:07.096347 | orchestrator | Thursday 19 February 2026 03:59:56 +0000 (0:00:02.366) 0:01:54.878 ***** 2026-02-19 04:00:07.096367 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:00:07.096386 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:00:07.096406 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:00:07.096425 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:00:07.096443 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:00:07.096461 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:00:07.096480 | orchestrator | 2026-02-19 04:00:07.096498 | orchestrator | TASK [neutron : Copy 
neutron-l3-agent-wrapper script] ************************** 2026-02-19 04:00:07.096518 | orchestrator | Thursday 19 February 2026 03:59:58 +0000 (0:00:01.939) 0:01:56.818 ***** 2026-02-19 04:00:07.096537 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:00:07.096555 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:00:07.096566 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:00:07.096576 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:00:07.096587 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:00:07.096598 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:00:07.096609 | orchestrator | 2026-02-19 04:00:07.096620 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-02-19 04:00:07.096643 | orchestrator | Thursday 19 February 2026 04:00:00 +0000 (0:00:01.911) 0:01:58.729 ***** 2026-02-19 04:00:07.096654 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:00:07.096665 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:00:07.096676 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:00:07.096686 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:00:07.096697 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:00:07.096708 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:00:07.096718 | orchestrator | 2026-02-19 04:00:07.096729 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-02-19 04:00:07.096740 | orchestrator | Thursday 19 February 2026 04:00:02 +0000 (0:00:02.337) 0:02:01.067 ***** 2026-02-19 04:00:07.096752 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-19 04:00:07.096765 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-19 04:00:07.096775 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:00:07.096786 | orchestrator | 
skipping: [testbed-node-1] 2026-02-19 04:00:07.096797 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-19 04:00:07.096808 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:00:07.096819 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-19 04:00:07.096830 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:00:07.096841 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-19 04:00:07.096860 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:00:07.096872 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-19 04:00:07.096883 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:00:07.096894 | orchestrator | 2026-02-19 04:00:07.096905 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-19 04:00:07.096916 | orchestrator | Thursday 19 February 2026 04:00:05 +0000 (0:00:02.039) 0:02:03.106 ***** 2026-02-19 04:00:07.096928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 04:00:07.096942 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:00:07.096966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 04:00:09.587738 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:00:09.587858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-19 04:00:09.587875 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:00:09.587886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 04:00:09.587896 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:00:09.587918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 04:00:09.587927 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:00:09.587936 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 04:00:09.587944 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:00:09.587952 | orchestrator | 2026-02-19 04:00:09.587961 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-19 04:00:09.587970 | orchestrator | Thursday 19 February 2026 04:00:07 +0000 (0:00:02.056) 0:02:05.163 ***** 2026-02-19 04:00:09.587996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 04:00:09.588034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 04:00:09.588056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-19 04:00:09.588071 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 04:00:09.588086 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 04:00:09.588122 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-19 04:02:13.234309 | orchestrator | 2026-02-19 04:02:13.234431 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-19 04:02:13.234452 | orchestrator | Thursday 19 February 2026 04:00:09 +0000 (0:00:02.497) 0:02:07.661 ***** 2026-02-19 04:02:13.234465 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:02:13.234477 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:02:13.234487 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:02:13.234498 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:02:13.234509 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:02:13.234521 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:02:13.234534 | orchestrator | 2026-02-19 04:02:13.234547 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-02-19 04:02:13.234560 | orchestrator | Thursday 19 February 2026 04:00:10 +0000 (0:00:00.705) 0:02:08.367 ***** 2026-02-19 04:02:13.234571 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:02:13.234586 | orchestrator | 2026-02-19 04:02:13.234598 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-02-19 04:02:13.234611 | orchestrator | Thursday 19 February 2026 04:00:12 +0000 (0:00:02.276) 0:02:10.643 ***** 2026-02-19 04:02:13.234624 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:02:13.234637 | orchestrator | 2026-02-19 04:02:13.234650 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-02-19 04:02:13.234663 | orchestrator | Thursday 19 
February 2026 04:00:14 +0000 (0:00:02.411) 0:02:13.055 ***** 2026-02-19 04:02:13.234675 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:02:13.234687 | orchestrator | 2026-02-19 04:02:13.234700 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-19 04:02:13.234713 | orchestrator | Thursday 19 February 2026 04:00:57 +0000 (0:00:42.048) 0:02:55.103 ***** 2026-02-19 04:02:13.234726 | orchestrator | 2026-02-19 04:02:13.234740 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-19 04:02:13.234753 | orchestrator | Thursday 19 February 2026 04:00:57 +0000 (0:00:00.085) 0:02:55.188 ***** 2026-02-19 04:02:13.234766 | orchestrator | 2026-02-19 04:02:13.234780 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-19 04:02:13.234793 | orchestrator | Thursday 19 February 2026 04:00:57 +0000 (0:00:00.068) 0:02:55.256 ***** 2026-02-19 04:02:13.234804 | orchestrator | 2026-02-19 04:02:13.234836 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-19 04:02:13.234852 | orchestrator | Thursday 19 February 2026 04:00:57 +0000 (0:00:00.067) 0:02:55.323 ***** 2026-02-19 04:02:13.234866 | orchestrator | 2026-02-19 04:02:13.234880 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-19 04:02:13.234893 | orchestrator | Thursday 19 February 2026 04:00:57 +0000 (0:00:00.070) 0:02:55.394 ***** 2026-02-19 04:02:13.234905 | orchestrator | 2026-02-19 04:02:13.234917 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-19 04:02:13.234930 | orchestrator | Thursday 19 February 2026 04:00:57 +0000 (0:00:00.066) 0:02:55.460 ***** 2026-02-19 04:02:13.234944 | orchestrator | 2026-02-19 04:02:13.234986 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] 
******************* 2026-02-19 04:02:13.235001 | orchestrator | Thursday 19 February 2026 04:00:57 +0000 (0:00:00.070) 0:02:55.531 ***** 2026-02-19 04:02:13.235016 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:02:13.235058 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:02:13.235072 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:02:13.235086 | orchestrator | 2026-02-19 04:02:13.235099 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-02-19 04:02:13.235113 | orchestrator | Thursday 19 February 2026 04:01:19 +0000 (0:00:22.072) 0:03:17.604 ***** 2026-02-19 04:02:13.235126 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:02:13.235140 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:02:13.235152 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:02:13.235164 | orchestrator | 2026-02-19 04:02:13.235176 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:02:13.235190 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-19 04:02:13.235205 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-19 04:02:13.235235 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-19 04:02:13.235248 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-19 04:02:13.235270 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-19 04:02:13.235283 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-19 04:02:13.235294 | orchestrator | 2026-02-19 04:02:13.235305 | orchestrator | 2026-02-19 04:02:13.235317 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-19 04:02:13.235329 | orchestrator | Thursday 19 February 2026 04:02:12 +0000 (0:00:53.013) 0:04:10.617 ***** 2026-02-19 04:02:13.235341 | orchestrator | =============================================================================== 2026-02-19 04:02:13.235353 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 53.01s 2026-02-19 04:02:13.235365 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.05s 2026-02-19 04:02:13.235376 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.07s 2026-02-19 04:02:13.235407 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.53s 2026-02-19 04:02:13.235418 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.39s 2026-02-19 04:02:13.235430 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.23s 2026-02-19 04:02:13.235442 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.25s 2026-02-19 04:02:13.235454 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.24s 2026-02-19 04:02:13.235466 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.74s 2026-02-19 04:02:13.235477 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.60s 2026-02-19 04:02:13.235489 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.39s 2026-02-19 04:02:13.235516 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.36s 2026-02-19 04:02:13.235528 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.25s 2026-02-19 04:02:13.235540 | orchestrator | neutron : Copying over 
config.json files for services ------------------- 3.21s 2026-02-19 04:02:13.235562 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.85s 2026-02-19 04:02:13.235589 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.64s 2026-02-19 04:02:13.235601 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.53s 2026-02-19 04:02:13.235611 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.50s 2026-02-19 04:02:13.235618 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 2.50s 2026-02-19 04:02:13.235625 | orchestrator | neutron : Creating Neutron database user and setting permissions -------- 2.41s 2026-02-19 04:02:17.279919 | orchestrator | 2026-02-19 04:02:17 | INFO  | Task c676697e-a757-498d-a67d-3f90719d507f (nova) was prepared for execution. 2026-02-19 04:02:17.280079 | orchestrator | 2026-02-19 04:02:17 | INFO  | It takes a moment until task c676697e-a757-498d-a67d-3f90719d507f (nova) has been started and output is visible here. 
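The `Check neutron containers` items echoed above each carry a `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`) that kolla-ansible's container module turns into a Docker healthcheck for the service container. As a rough illustration of that shape only — the helper below and the CLI-flag mapping are assumptions for readability, not kolla-ansible's actual implementation (which drives the container engine API directly):

```python
def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    """Sketch: render a kolla-style healthcheck dict as docker-run flags.

    Hypothetical helper; field names match the dicts in the log above,
    but the flag translation is illustrative, not kolla-ansible code.
    """
    flags = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", hc["retries"],
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    kind, cmd = hc["test"]
    if kind == "CMD-SHELL":          # the log entries all use CMD-SHELL
        flags += ["--health-cmd", cmd]
    return flags


# Values copied verbatim from the neutron-server item for testbed-node-0:
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
    "timeout": "30",
}
print(healthcheck_to_docker_flags(hc))
```

The `healthcheck_curl`/`healthcheck_port` commands referenced in the `test` fields are scripts shipped inside the Kolla images, which is why the probes run in the container's own network namespace.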
2026-02-19 04:04:25.270187 | orchestrator | 2026-02-19 04:04:25.270291 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:04:25.270305 | orchestrator | 2026-02-19 04:04:25.270314 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-02-19 04:04:25.270323 | orchestrator | Thursday 19 February 2026 04:02:22 +0000 (0:00:00.296) 0:00:00.296 ***** 2026-02-19 04:04:25.270332 | orchestrator | changed: [testbed-manager] 2026-02-19 04:04:25.270342 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:04:25.270350 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:04:25.270358 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:04:25.270366 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:04:25.270374 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:04:25.270382 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:04:25.270390 | orchestrator | 2026-02-19 04:04:25.270398 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 04:04:25.270406 | orchestrator | Thursday 19 February 2026 04:02:23 +0000 (0:00:00.926) 0:00:01.223 ***** 2026-02-19 04:04:25.270414 | orchestrator | changed: [testbed-manager] 2026-02-19 04:04:25.270422 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:04:25.270430 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:04:25.270438 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:04:25.270446 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:04:25.270454 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:04:25.270462 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:04:25.270470 | orchestrator | 2026-02-19 04:04:25.270478 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:04:25.270487 | orchestrator | Thursday 19 February 2026 04:02:24 +0000 (0:00:00.886) 
0:00:02.110 ***** 2026-02-19 04:04:25.270495 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-02-19 04:04:25.270503 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-02-19 04:04:25.270511 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-02-19 04:04:25.270519 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-02-19 04:04:25.270527 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-02-19 04:04:25.270535 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-02-19 04:04:25.270543 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-02-19 04:04:25.270551 | orchestrator | 2026-02-19 04:04:25.270559 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-02-19 04:04:25.270567 | orchestrator | 2026-02-19 04:04:25.270620 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-19 04:04:25.270629 | orchestrator | Thursday 19 February 2026 04:02:24 +0000 (0:00:00.729) 0:00:02.839 ***** 2026-02-19 04:04:25.270637 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:04:25.270645 | orchestrator | 2026-02-19 04:04:25.270653 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-02-19 04:04:25.270683 | orchestrator | Thursday 19 February 2026 04:02:25 +0000 (0:00:00.791) 0:00:03.631 ***** 2026-02-19 04:04:25.270693 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-02-19 04:04:25.270710 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-02-19 04:04:25.270731 | orchestrator | 2026-02-19 04:04:25.270747 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-02-19 04:04:25.270760 | orchestrator | Thursday 19 February 2026 04:02:30 +0000 (0:00:04.536) 
0:00:08.167 ***** 2026-02-19 04:04:25.270775 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-19 04:04:25.270789 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-19 04:04:25.270803 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:04:25.270818 | orchestrator | 2026-02-19 04:04:25.270832 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-19 04:04:25.270848 | orchestrator | Thursday 19 February 2026 04:02:34 +0000 (0:00:04.662) 0:00:12.830 ***** 2026-02-19 04:04:25.270862 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:04:25.270877 | orchestrator | 2026-02-19 04:04:25.270893 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-02-19 04:04:25.270909 | orchestrator | Thursday 19 February 2026 04:02:35 +0000 (0:00:00.668) 0:00:13.499 ***** 2026-02-19 04:04:25.270923 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:04:25.270937 | orchestrator | 2026-02-19 04:04:25.270950 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-02-19 04:04:25.270963 | orchestrator | Thursday 19 February 2026 04:02:36 +0000 (0:00:01.376) 0:00:14.875 ***** 2026-02-19 04:04:25.270977 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:04:25.270989 | orchestrator | 2026-02-19 04:04:25.271001 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-19 04:04:25.271014 | orchestrator | Thursday 19 February 2026 04:02:39 +0000 (0:00:02.846) 0:00:17.722 ***** 2026-02-19 04:04:25.271027 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:04:25.271040 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:04:25.271054 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:04:25.271066 | orchestrator | 2026-02-19 04:04:25.271078 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 
2026-02-19 04:04:25.271091 | orchestrator | Thursday 19 February 2026 04:02:39 +0000 (0:00:00.313) 0:00:18.035 ***** 2026-02-19 04:04:25.271104 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:04:25.271117 | orchestrator | 2026-02-19 04:04:25.271130 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-02-19 04:04:25.271144 | orchestrator | Thursday 19 February 2026 04:03:14 +0000 (0:00:34.048) 0:00:52.084 ***** 2026-02-19 04:04:25.271157 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:04:25.271171 | orchestrator | 2026-02-19 04:04:25.271184 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-19 04:04:25.271199 | orchestrator | Thursday 19 February 2026 04:03:30 +0000 (0:00:16.148) 0:01:08.233 ***** 2026-02-19 04:04:25.271215 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:04:25.271229 | orchestrator | 2026-02-19 04:04:25.271241 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-19 04:04:25.271265 | orchestrator | Thursday 19 February 2026 04:03:43 +0000 (0:00:13.437) 0:01:21.670 ***** 2026-02-19 04:04:25.271291 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:04:25.271300 | orchestrator | 2026-02-19 04:04:25.271308 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-02-19 04:04:25.271316 | orchestrator | Thursday 19 February 2026 04:03:44 +0000 (0:00:00.615) 0:01:22.285 ***** 2026-02-19 04:04:25.271323 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:04:25.271331 | orchestrator | 2026-02-19 04:04:25.271339 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-19 04:04:25.271347 | orchestrator | Thursday 19 February 2026 04:03:44 +0000 (0:00:00.440) 0:01:22.726 ***** 2026-02-19 04:04:25.271355 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:04:25.271374 | orchestrator | 2026-02-19 04:04:25.271381 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-02-19 04:04:25.271389 | orchestrator | Thursday 19 February 2026 04:03:45 +0000 (0:00:00.600) 0:01:23.326 ***** 2026-02-19 04:04:25.271397 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:04:25.271405 | orchestrator | 2026-02-19 04:04:25.271413 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-19 04:04:25.271420 | orchestrator | Thursday 19 February 2026 04:04:04 +0000 (0:00:19.534) 0:01:42.861 ***** 2026-02-19 04:04:25.271428 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:04:25.271436 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:04:25.271444 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:04:25.271452 | orchestrator | 2026-02-19 04:04:25.271466 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-02-19 04:04:25.271479 | orchestrator | 2026-02-19 04:04:25.271492 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-19 04:04:25.271504 | orchestrator | Thursday 19 February 2026 04:04:05 +0000 (0:00:00.339) 0:01:43.200 ***** 2026-02-19 04:04:25.271515 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:04:25.271527 | orchestrator | 2026-02-19 04:04:25.271540 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-02-19 04:04:25.271638 | orchestrator | Thursday 19 February 2026 04:04:05 +0000 (0:00:00.818) 0:01:44.018 ***** 2026-02-19 04:04:25.271656 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:04:25.271667 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:04:25.271679 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:04:25.271692 | 
orchestrator | 2026-02-19 04:04:25.271706 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-02-19 04:04:25.271721 | orchestrator | Thursday 19 February 2026 04:04:08 +0000 (0:00:02.521) 0:01:46.540 ***** 2026-02-19 04:04:25.271734 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:04:25.271749 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:04:25.271761 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:04:25.271774 | orchestrator | 2026-02-19 04:04:25.271786 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-19 04:04:25.271794 | orchestrator | Thursday 19 February 2026 04:04:10 +0000 (0:00:02.474) 0:01:49.014 ***** 2026-02-19 04:04:25.271806 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:04:25.271820 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:04:25.271833 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:04:25.271845 | orchestrator | 2026-02-19 04:04:25.271859 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-19 04:04:25.271871 | orchestrator | Thursday 19 February 2026 04:04:11 +0000 (0:00:00.556) 0:01:49.571 ***** 2026-02-19 04:04:25.271884 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-19 04:04:25.271898 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:04:25.271912 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-19 04:04:25.271925 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:04:25.271939 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-19 04:04:25.271952 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-02-19 04:04:25.271966 | orchestrator | 2026-02-19 04:04:25.271979 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-19 04:04:25.271989 | orchestrator | Thursday 19 February 2026 
04:04:19 +0000 (0:00:08.330) 0:01:57.902 ***** 2026-02-19 04:04:25.271997 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:04:25.272005 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:04:25.272013 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:04:25.272021 | orchestrator | 2026-02-19 04:04:25.272029 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-19 04:04:25.272036 | orchestrator | Thursday 19 February 2026 04:04:20 +0000 (0:00:00.330) 0:01:58.232 ***** 2026-02-19 04:04:25.272044 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-19 04:04:25.272060 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:04:25.272068 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-19 04:04:25.272076 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:04:25.272084 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-19 04:04:25.272092 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:04:25.272100 | orchestrator | 2026-02-19 04:04:25.272108 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-19 04:04:25.272116 | orchestrator | Thursday 19 February 2026 04:04:21 +0000 (0:00:01.114) 0:01:59.347 ***** 2026-02-19 04:04:25.272124 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:04:25.272131 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:04:25.272139 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:04:25.272147 | orchestrator | 2026-02-19 04:04:25.272155 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-02-19 04:04:25.272163 | orchestrator | Thursday 19 February 2026 04:04:21 +0000 (0:00:00.485) 0:01:59.832 ***** 2026-02-19 04:04:25.272171 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:04:25.272178 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:04:25.272186 | orchestrator | changed: 
[testbed-node-0] 2026-02-19 04:04:25.272194 | orchestrator | 2026-02-19 04:04:25.272202 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-02-19 04:04:25.272210 | orchestrator | Thursday 19 February 2026 04:04:22 +0000 (0:00:01.043) 0:02:00.876 ***** 2026-02-19 04:04:25.272218 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:04:25.272226 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:04:25.272245 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:05:47.456237 | orchestrator | 2026-02-19 04:05:47.456463 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-02-19 04:05:47.456487 | orchestrator | Thursday 19 February 2026 04:04:25 +0000 (0:00:02.452) 0:02:03.329 ***** 2026-02-19 04:05:47.456499 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:05:47.456510 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:05:47.456520 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:05:47.456531 | orchestrator | 2026-02-19 04:05:47.456542 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-19 04:05:47.456552 | orchestrator | Thursday 19 February 2026 04:04:47 +0000 (0:00:22.185) 0:02:25.514 ***** 2026-02-19 04:05:47.456561 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:05:47.456571 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:05:47.456581 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:05:47.456590 | orchestrator | 2026-02-19 04:05:47.456600 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-19 04:05:47.456609 | orchestrator | Thursday 19 February 2026 04:05:00 +0000 (0:00:13.321) 0:02:38.836 ***** 2026-02-19 04:05:47.456619 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:05:47.456629 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:05:47.456639 | orchestrator | skipping: [testbed-node-2] 
2026-02-19 04:05:47.456648 | orchestrator | 2026-02-19 04:05:47.456658 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-19 04:05:47.456668 | orchestrator | Thursday 19 February 2026 04:05:01 +0000 (0:00:01.092) 0:02:39.928 ***** 2026-02-19 04:05:47.456678 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:05:47.456688 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:05:47.456697 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:05:47.456707 | orchestrator | 2026-02-19 04:05:47.456717 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-19 04:05:47.456726 | orchestrator | Thursday 19 February 2026 04:05:15 +0000 (0:00:13.302) 0:02:53.230 ***** 2026-02-19 04:05:47.456736 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:05:47.456745 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:05:47.456755 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:05:47.456765 | orchestrator | 2026-02-19 04:05:47.456776 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-19 04:05:47.456812 | orchestrator | Thursday 19 February 2026 04:05:16 +0000 (0:00:01.076) 0:02:54.307 ***** 2026-02-19 04:05:47.456823 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:05:47.456834 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:05:47.456846 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:05:47.456857 | orchestrator | 2026-02-19 04:05:47.456868 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-19 04:05:47.456879 | orchestrator | 2026-02-19 04:05:47.456891 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-19 04:05:47.456902 | orchestrator | Thursday 19 February 2026 04:05:16 +0000 (0:00:00.328) 0:02:54.635 ***** 2026-02-19 04:05:47.456913 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:05:47.456926 | orchestrator | 2026-02-19 04:05:47.456943 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-19 04:05:47.456964 | orchestrator | Thursday 19 February 2026 04:05:17 +0000 (0:00:00.800) 0:02:55.435 ***** 2026-02-19 04:05:47.456988 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-19 04:05:47.457006 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-19 04:05:47.457021 | orchestrator | 2026-02-19 04:05:47.457037 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-19 04:05:47.457054 | orchestrator | Thursday 19 February 2026 04:05:20 +0000 (0:00:03.430) 0:02:58.865 ***** 2026-02-19 04:05:47.457072 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-19 04:05:47.457209 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-19 04:05:47.457233 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-19 04:05:47.457244 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-19 04:05:47.457255 | orchestrator | 2026-02-19 04:05:47.457265 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-19 04:05:47.457276 | orchestrator | Thursday 19 February 2026 04:05:27 +0000 (0:00:06.907) 0:03:05.773 ***** 2026-02-19 04:05:47.457292 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-19 04:05:47.457309 | orchestrator | 2026-02-19 04:05:47.457323 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-02-19 04:05:47.457368 | orchestrator | Thursday 19 February 2026 04:05:31 +0000 (0:00:03.347) 0:03:09.121 ***** 2026-02-19 04:05:47.457390 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-19 04:05:47.457406 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-19 04:05:47.457421 | orchestrator | 2026-02-19 04:05:47.457437 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-19 04:05:47.457451 | orchestrator | Thursday 19 February 2026 04:05:35 +0000 (0:00:04.050) 0:03:13.171 ***** 2026-02-19 04:05:47.457468 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-19 04:05:47.457483 | orchestrator | 2026-02-19 04:05:47.457500 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-19 04:05:47.457518 | orchestrator | Thursday 19 February 2026 04:05:38 +0000 (0:00:03.380) 0:03:16.552 ***** 2026-02-19 04:05:47.457535 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-19 04:05:47.457551 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-19 04:05:47.457567 | orchestrator | 2026-02-19 04:05:47.457593 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-19 04:05:47.457633 | orchestrator | Thursday 19 February 2026 04:05:46 +0000 (0:00:07.630) 0:03:24.182 ***** 2026-02-19 04:05:47.457650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:05:47.457694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:05:47.457707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:05:47.457733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-19 04:05:52.124192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:05:52.124265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:05:52.124272 | orchestrator | 2026-02-19 04:05:52.124279 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-19 04:05:52.124284 | orchestrator | Thursday 19 February 2026 04:05:47 +0000 (0:00:01.338) 0:03:25.520 ***** 2026-02-19 04:05:52.124289 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:05:52.124295 | orchestrator | 2026-02-19 04:05:52.124299 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-19 04:05:52.124303 | orchestrator | Thursday 19 February 2026 04:05:47 +0000 (0:00:00.148) 0:03:25.668 ***** 2026-02-19 04:05:52.124307 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:05:52.124311 | 
orchestrator | skipping: [testbed-node-1] 2026-02-19 04:05:52.124315 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:05:52.124319 | orchestrator | 2026-02-19 04:05:52.124324 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-19 04:05:52.124388 | orchestrator | Thursday 19 February 2026 04:05:47 +0000 (0:00:00.315) 0:03:25.984 ***** 2026-02-19 04:05:52.124393 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 04:05:52.124398 | orchestrator | 2026-02-19 04:05:52.124402 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-19 04:05:52.124406 | orchestrator | Thursday 19 February 2026 04:05:48 +0000 (0:00:00.775) 0:03:26.760 ***** 2026-02-19 04:05:52.124410 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:05:52.124414 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:05:52.124418 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:05:52.124422 | orchestrator | 2026-02-19 04:05:52.124426 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-19 04:05:52.124430 | orchestrator | Thursday 19 February 2026 04:05:49 +0000 (0:00:00.503) 0:03:27.264 ***** 2026-02-19 04:05:52.124435 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:05:52.124440 | orchestrator | 2026-02-19 04:05:52.124445 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-19 04:05:52.124449 | orchestrator | Thursday 19 February 2026 04:05:49 +0000 (0:00:00.653) 0:03:27.917 ***** 2026-02-19 04:05:52.124470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:05:52.124506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:05:52.124512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:05:52.124517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:05:52.124522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:05:52.124534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:05:52.124539 | orchestrator | 2026-02-19 04:05:52.124546 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-19 04:05:53.850183 | orchestrator | Thursday 19 February 2026 04:05:52 +0000 (0:00:02.275) 0:03:30.193 ***** 2026-02-19 04:05:53.850288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-19 04:05:53.850304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:05:53.850316 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:05:53.850382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-19 04:05:53.850428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:05:53.850439 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:05:53.850465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-19 04:05:53.850476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:05:53.850484 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:05:53.850494 | orchestrator | 2026-02-19 04:05:53.850503 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-19 04:05:53.850512 | orchestrator | Thursday 19 February 2026 04:05:52 +0000 (0:00:00.873) 
0:03:31.067 ***** 2026-02-19 04:05:53.850521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-19 04:05:53.850538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:05:53.850546 | orchestrator | skipping: 
[testbed-node-0] 2026-02-19 04:05:53.850566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-19 04:05:56.202291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:05:56.202445 | orchestrator | skipping: 
[testbed-node-1] 2026-02-19 04:05:56.202461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-19 04:05:56.202492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:05:56.202500 | orchestrator | skipping: 
[testbed-node-2] 2026-02-19 04:05:56.202508 | orchestrator | 2026-02-19 04:05:56.202517 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-19 04:05:56.202526 | orchestrator | Thursday 19 February 2026 04:05:53 +0000 (0:00:00.854) 0:03:31.921 ***** 2026-02-19 04:05:56.202547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:05:56.202573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:05:56.202582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:05:56.202600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:05:56.202609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:05:56.202623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:02.756581 | orchestrator | 2026-02-19 04:06:02.756675 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-19 04:06:02.756687 | orchestrator | Thursday 19 February 2026 04:05:56 +0000 (0:00:02.346) 0:03:34.267 ***** 2026-02-19 04:06:02.756699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:06:02.756730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:06:02.756751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:06:02.756774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:02.756784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:02.756798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:02.756805 | orchestrator | 2026-02-19 04:06:02.756812 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-19 04:06:02.756819 | orchestrator | Thursday 19 February 2026 04:06:02 +0000 (0:00:05.940) 0:03:40.207 ***** 2026-02-19 04:06:02.756842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-19 04:06:02.756850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:06:02.756858 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:06:02.756874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-19 04:06:07.368050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:06:07.368131 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:06:07.368141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-19 04:06:07.368161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:06:07.368167 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:06:07.368171 | orchestrator | 2026-02-19 04:06:07.368177 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-19 04:06:07.368183 | orchestrator | Thursday 19 February 2026 04:06:02 +0000 (0:00:00.617) 0:03:40.825 ***** 2026-02-19 04:06:07.368188 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:06:07.368193 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:06:07.368197 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:06:07.368202 | orchestrator | 2026-02-19 04:06:07.368206 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-19 04:06:07.368211 | orchestrator | Thursday 19 February 2026 04:06:04 +0000 (0:00:01.586) 0:03:42.412 ***** 2026-02-19 04:06:07.368215 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:06:07.368220 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:06:07.368224 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:06:07.368229 | orchestrator | 2026-02-19 04:06:07.368233 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-19 04:06:07.368238 | orchestrator | Thursday 19 February 2026 04:06:04 +0000 (0:00:00.387) 0:03:42.799 ***** 2026-02-19 04:06:07.368270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:06:07.368276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:06:07.368285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-19 04:06:07.368316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:07.368331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:07.368344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:46.569838 | orchestrator | 2026-02-19 04:06:46.569922 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-19 04:06:46.569930 | orchestrator | Thursday 19 February 2026 04:06:06 +0000 (0:00:02.213) 0:03:45.013 ***** 2026-02-19 04:06:46.569934 | orchestrator | 2026-02-19 04:06:46.569938 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-19 04:06:46.569942 | orchestrator | Thursday 19 February 2026 04:06:07 
+0000 (0:00:00.140) 0:03:45.153 ***** 2026-02-19 04:06:46.569946 | orchestrator | 2026-02-19 04:06:46.569950 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-19 04:06:46.569955 | orchestrator | Thursday 19 February 2026 04:06:07 +0000 (0:00:00.138) 0:03:45.292 ***** 2026-02-19 04:06:46.569959 | orchestrator | 2026-02-19 04:06:46.569963 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-19 04:06:46.569966 | orchestrator | Thursday 19 February 2026 04:06:07 +0000 (0:00:00.139) 0:03:45.431 ***** 2026-02-19 04:06:46.569970 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:06:46.569976 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:06:46.569980 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:06:46.569983 | orchestrator | 2026-02-19 04:06:46.569987 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-19 04:06:46.569991 | orchestrator | Thursday 19 February 2026 04:06:29 +0000 (0:00:21.947) 0:04:07.379 ***** 2026-02-19 04:06:46.569996 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:06:46.570002 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:06:46.570007 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:06:46.570050 | orchestrator | 2026-02-19 04:06:46.570055 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-19 04:06:46.570059 | orchestrator | 2026-02-19 04:06:46.570063 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-19 04:06:46.570067 | orchestrator | Thursday 19 February 2026 04:06:34 +0000 (0:00:05.312) 0:04:12.692 ***** 2026-02-19 04:06:46.570072 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:06:46.570077 | 
orchestrator | 2026-02-19 04:06:46.570096 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-19 04:06:46.570152 | orchestrator | Thursday 19 February 2026 04:06:35 +0000 (0:00:01.246) 0:04:13.938 ***** 2026-02-19 04:06:46.570173 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:06:46.570177 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:06:46.570181 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:06:46.570244 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:06:46.570251 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:06:46.570257 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:06:46.570263 | orchestrator | 2026-02-19 04:06:46.570269 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-19 04:06:46.570276 | orchestrator | Thursday 19 February 2026 04:06:36 +0000 (0:00:00.797) 0:04:14.736 ***** 2026-02-19 04:06:46.570282 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:06:46.570288 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:06:46.570296 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:06:46.570300 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 04:06:46.570304 | orchestrator | 2026-02-19 04:06:46.570308 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-19 04:06:46.570312 | orchestrator | Thursday 19 February 2026 04:06:37 +0000 (0:00:00.893) 0:04:15.629 ***** 2026-02-19 04:06:46.570317 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-19 04:06:46.570321 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-19 04:06:46.570324 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-19 04:06:46.570328 | orchestrator | 2026-02-19 04:06:46.570332 | orchestrator | TASK [module-load : Persist modules via modules-load.d] 
************************ 2026-02-19 04:06:46.570336 | orchestrator | Thursday 19 February 2026 04:06:38 +0000 (0:00:00.934) 0:04:16.564 ***** 2026-02-19 04:06:46.570340 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-19 04:06:46.570344 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-19 04:06:46.570347 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-19 04:06:46.570351 | orchestrator | 2026-02-19 04:06:46.570355 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-19 04:06:46.570359 | orchestrator | Thursday 19 February 2026 04:06:39 +0000 (0:00:01.252) 0:04:17.816 ***** 2026-02-19 04:06:46.570363 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-19 04:06:46.570366 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:06:46.570370 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-19 04:06:46.570374 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:06:46.570378 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-19 04:06:46.570381 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:06:46.570385 | orchestrator | 2026-02-19 04:06:46.570389 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-19 04:06:46.570393 | orchestrator | Thursday 19 February 2026 04:06:40 +0000 (0:00:00.566) 0:04:18.383 ***** 2026-02-19 04:06:46.570396 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-19 04:06:46.570400 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-19 04:06:46.570405 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 04:06:46.570410 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 04:06:46.570414 | orchestrator | 
skipping: [testbed-node-0] 2026-02-19 04:06:46.570419 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 04:06:46.570423 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 04:06:46.570427 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:06:46.570445 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 04:06:46.570450 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 04:06:46.570454 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:06:46.570458 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-19 04:06:46.570468 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-19 04:06:46.570473 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-19 04:06:46.570477 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-19 04:06:46.570481 | orchestrator | 2026-02-19 04:06:46.570486 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-19 04:06:46.570490 | orchestrator | Thursday 19 February 2026 04:06:41 +0000 (0:00:01.282) 0:04:19.666 ***** 2026-02-19 04:06:46.570494 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:06:46.570499 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:06:46.570503 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:06:46.570507 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:06:46.570511 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:06:46.570516 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:06:46.570520 | orchestrator | 2026-02-19 04:06:46.570524 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-19 
04:06:46.570529 | orchestrator | Thursday 19 February 2026 04:06:42 +0000 (0:00:01.221) 0:04:20.887 ***** 2026-02-19 04:06:46.570533 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:06:46.570537 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:06:46.570541 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:06:46.570545 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:06:46.570549 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:06:46.570553 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:06:46.570558 | orchestrator | 2026-02-19 04:06:46.570563 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-19 04:06:46.570572 | orchestrator | Thursday 19 February 2026 04:06:44 +0000 (0:00:01.819) 0:04:22.706 ***** 2026-02-19 04:06:46.570579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-19 04:06:46.570587 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-19 04:06:46.570595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364359 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364721 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364752 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364773 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:48.364853 | orchestrator | 2026-02-19 04:06:48.364877 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-19 
04:06:48.364901 | orchestrator | Thursday 19 February 2026 04:06:47 +0000 (0:00:02.392) 0:04:25.099 ***** 2026-02-19 04:06:48.364926 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:06:48.364946 | orchestrator | 2026-02-19 04:06:48.364960 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-19 04:06:48.364982 | orchestrator | Thursday 19 February 2026 04:06:48 +0000 (0:00:01.332) 0:04:26.431 ***** 2026-02-19 04:06:51.683607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-19 04:06:51.683728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-19 04:06:51.683742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-19 04:06:51.683753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:06:51.683781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:06:51.683806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:06:51.683815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-19 
04:06:51.683830 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-19 04:06:51.683838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-19 04:06:51.683846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:51.683860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:51.683869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:51.683883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:53.455065 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:53.455157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:06:53.455167 | orchestrator | 2026-02-19 04:06:53.455175 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-19 04:06:53.455343 | orchestrator | Thursday 19 February 2026 04:06:52 +0000 (0:00:03.666) 0:04:30.097 ***** 2026-02-19 04:06:53.455357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-19 04:06:53.455366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-19 04:06:53.455374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-19 04:06:53.455380 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:06:53.455407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-19 04:06:53.455414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-19 04:06:53.455420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-19 04:06:53.455432 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:06:53.455438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-19 04:06:53.455444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-19 04:06:53.455456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-19 04:06:54.956053 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:06:54.956302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-19 04:06:54.956331 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:06:54.956374 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:06:54.956401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-19 04:06:54.956416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:06:54.956431 | orchestrator | skipping: [testbed-node-1] 2026-02-19 
04:06:54.956445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-19 04:06:54.956458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-19 04:06:54.956473 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:06:54.956488 | orchestrator |
2026-02-19 04:06:54.956506 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-02-19 04:06:54.956521 | orchestrator | Thursday 19 February 2026 04:06:53 +0000 (0:00:01.591) 0:04:31.689 *****
2026-02-19 04:06:54.956558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-19 04:06:54.956580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-19 04:06:54.956592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-02-19 04:06:54.956602 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:06:54.956612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-19 04:06:54.956623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-19 04:06:54.956644 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-19 04:07:02.664106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-19 04:07:02.664247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-02-19 04:07:02.664268 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:07:02.664279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-19 04:07:02.664287 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:07:02.664296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-19 04:07:02.664305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:07:02.664313 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:07:02.664364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-19 04:07:02.664406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:07:02.664418 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:07:02.664430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-19 04:07:02.664442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-19 04:07:02.664454 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:07:02.664465 | orchestrator |
2026-02-19 04:07:02.664479 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-19 04:07:02.664493 | orchestrator | Thursday 19 February 2026 04:06:55 +0000 (0:00:02.075) 0:04:33.764 *****
2026-02-19 04:07:02.664506 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:07:02.664519 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:07:02.664532 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:07:02.664544 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 04:07:02.664557 | orchestrator |
2026-02-19 04:07:02.664569 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-02-19 04:07:02.664582 | orchestrator | Thursday 19 February 2026 04:06:56 +0000 (0:00:01.133) 0:04:34.898 *****
2026-02-19 04:07:02.664595 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-19 04:07:02.664633 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-19 04:07:02.664646 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-19 04:07:02.664657 | orchestrator |
2026-02-19 04:07:02.664664 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-02-19 04:07:02.664671 | orchestrator | Thursday 19 February 2026 04:06:57 +0000 (0:00:01.157) 0:04:36.055 *****
2026-02-19 04:07:02.664678 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-19 04:07:02.664686 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-19 04:07:02.664693 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-19 04:07:02.664700 | orchestrator |
2026-02-19 04:07:02.664707 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-02-19 04:07:02.664723 | orchestrator | Thursday 19 February 2026 04:06:58 +0000 (0:00:00.962) 0:04:37.018 *****
2026-02-19 04:07:02.664730 | orchestrator | ok: [testbed-node-3]
2026-02-19 04:07:02.664738 | orchestrator | ok: [testbed-node-4]
2026-02-19 04:07:02.664745 | orchestrator | ok: [testbed-node-5]
2026-02-19 04:07:02.664752 | orchestrator |
2026-02-19 04:07:02.664759 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-02-19 04:07:02.664767 | orchestrator | Thursday 19 February 2026 04:06:59 +0000 (0:00:00.554) 0:04:37.572 *****
2026-02-19 04:07:02.664774 | orchestrator | ok: [testbed-node-3]
2026-02-19 04:07:02.664781 | orchestrator | ok: [testbed-node-4]
2026-02-19 04:07:02.664788 | orchestrator | ok: [testbed-node-5]
2026-02-19 04:07:02.664795 | orchestrator |
2026-02-19 04:07:02.664802 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-02-19 04:07:02.664809 | orchestrator | Thursday 19 February 2026 04:07:00 +0000 (0:00:00.521) 0:04:38.093 *****
2026-02-19 04:07:02.664816 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-19 04:07:02.664824 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-19 04:07:02.664831 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-19 04:07:02.664838 | orchestrator |
2026-02-19 04:07:02.664852 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-02-19 04:07:02.664863 | orchestrator | Thursday 19 February 2026 04:07:01 +0000 (0:00:01.379) 0:04:39.473 *****
2026-02-19 04:07:02.664876 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-19 04:07:02.664889 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-19 04:07:02.664911 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-19 04:07:22.147763 | orchestrator |
2026-02-19 04:07:22.147887 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-02-19 04:07:22.147899 | orchestrator | Thursday 19 February 2026 04:07:02 +0000 (0:00:01.258) 0:04:40.731 *****
2026-02-19 04:07:22.147907 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-19 04:07:22.147914 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-19 04:07:22.147921 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-19 04:07:22.147927 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-02-19 04:07:22.147933 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-02-19 04:07:22.147940 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-02-19 04:07:22.147947 | orchestrator |
2026-02-19 04:07:22.147954 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-02-19 04:07:22.147960 | orchestrator | Thursday 19 February 2026 04:07:06 +0000 (0:00:03.883) 0:04:44.615 *****
2026-02-19 04:07:22.147967 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:07:22.147977 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:07:22.147984 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:07:22.147990 | orchestrator |
2026-02-19 04:07:22.147997 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-02-19 04:07:22.148002 | orchestrator | Thursday 19 February 2026 04:07:06 +0000 (0:00:00.342) 0:04:44.958 *****
2026-02-19 04:07:22.148007 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:07:22.148013 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:07:22.148020 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:07:22.148026 | orchestrator |
2026-02-19 04:07:22.148032 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-02-19 04:07:22.148038 | orchestrator | Thursday 19 February 2026 04:07:07 +0000 (0:00:00.574) 0:04:45.532 *****
2026-02-19 04:07:22.148044 | orchestrator | changed: [testbed-node-3]
2026-02-19 04:07:22.148051 | orchestrator | changed: [testbed-node-4]
2026-02-19 04:07:22.148058 | orchestrator | changed: [testbed-node-5]
2026-02-19 04:07:22.148065 | orchestrator |
2026-02-19 04:07:22.148072 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-02-19 04:07:22.148101 | orchestrator | Thursday 19 February 2026 04:07:08 +0000 (0:00:01.405) 0:04:46.938 *****
2026-02-19 04:07:22.148107 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-19 04:07:22.148147 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-19 04:07:22.148153 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-19 04:07:22.148158 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-19 04:07:22.148163 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-19 04:07:22.148168 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-19 04:07:22.148173 | orchestrator |
2026-02-19 04:07:22.148177 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-02-19 04:07:22.148182 | orchestrator | Thursday 19 February 2026 04:07:12 +0000 (0:00:03.403) 0:04:50.341 *****
2026-02-19 04:07:22.148186 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-19 04:07:22.148190 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-19 04:07:22.148195 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-19 04:07:22.148199 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-19 04:07:22.148203 | orchestrator | changed: [testbed-node-3]
2026-02-19 04:07:22.148208 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-19 04:07:22.148212 | orchestrator | changed: [testbed-node-4]
2026-02-19 04:07:22.148216 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-19 04:07:22.148221 | orchestrator | changed: [testbed-node-5]
2026-02-19 04:07:22.148225 | orchestrator |
2026-02-19 04:07:22.148229 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-02-19 04:07:22.148234 | orchestrator | Thursday 19 February 2026 04:07:15 +0000 (0:00:03.641) 0:04:53.983 *****
2026-02-19 04:07:22.148238 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:07:22.148243 | orchestrator |
2026-02-19 04:07:22.148247 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-02-19 04:07:22.148252 | orchestrator | Thursday 19 February 2026 04:07:16 +0000 (0:00:00.134) 0:04:54.117 *****
2026-02-19 04:07:22.148256 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:07:22.148261 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:07:22.148265 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:07:22.148269 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:07:22.148273 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:07:22.148278 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:07:22.148282 | orchestrator |
2026-02-19 04:07:22.148286 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-02-19 04:07:22.148291 | orchestrator | Thursday 19 February 2026 04:07:16 +0000 (0:00:00.872) 0:04:54.990 *****
2026-02-19 04:07:22.148295 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-19 04:07:22.148299 | orchestrator |
2026-02-19 04:07:22.148315 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-02-19 04:07:22.148319 | orchestrator | Thursday 19 February 2026 04:07:17 +0000 (0:00:00.737) 0:04:55.727 *****
2026-02-19 04:07:22.148323 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:07:22.148328 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:07:22.148332 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:07:22.148336 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:07:22.148354 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:07:22.148359 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:07:22.148363 | orchestrator |
2026-02-19 04:07:22.148373 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-02-19 04:07:22.148377 | orchestrator | Thursday 19 February 2026 04:07:18 +0000 (0:00:00.930) 0:04:56.658 ***** 2026-02-19 04:07:22.148385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-19 04:07:22.148393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-19 04:07:22.148398 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-19 04:07:22.148404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:07:22.148416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:07:28.743882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:07:28.743962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-19 04:07:28.743972 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-19 04:07:28.743977 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-19 04:07:28.743983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:07:28.743988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:07:28.744016 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:07:28.744038 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:07:28.744045 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:07:28.744051 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:07:28.744056 | orchestrator | 2026-02-19 04:07:28.744064 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-19 04:07:28.744070 | orchestrator | Thursday 19 February 2026 04:07:22 +0000 (0:00:03.651) 0:05:00.309 ***** 2026-02-19 04:07:28.744076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-19 04:07:28.744085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-19 04:07:28.744148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-19 04:07:29.085581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-19 04:07:29.085676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-19 04:07:29.085690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-19 04:07:29.085701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:07:29.085746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:07:29.085775 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:07:29.085792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:07:29.085809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:07:29.085825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:07:29.085841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:07:29.085871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:07:29.085885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:07:29.085904 | orchestrator | 2026-02-19 04:07:29.085944 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-19 04:07:29.085971 | orchestrator | Thursday 19 February 2026 04:07:29 +0000 (0:00:06.845) 0:05:07.154 ***** 2026-02-19 04:07:51.042001 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:07:51.042155 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:07:51.042168 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:07:51.042174 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:07:51.042181 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:07:51.042188 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:07:51.042195 | orchestrator | 2026-02-19 04:07:51.042204 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-19 04:07:51.042213 | orchestrator | Thursday 19 February 2026 04:07:30 +0000 (0:00:01.270) 0:05:08.424 ***** 2026-02-19 04:07:51.042219 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-19 04:07:51.042227 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-19 04:07:51.042234 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-19 04:07:51.042241 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-19 04:07:51.042248 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-19 04:07:51.042256 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-19 04:07:51.042264 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-19 04:07:51.042285 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:07:51.042299 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-19 04:07:51.042306 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:07:51.042314 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-19 04:07:51.042321 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:07:51.042328 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-19 04:07:51.042359 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-19 04:07:51.042367 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-19 04:07:51.042375 | orchestrator | 2026-02-19 04:07:51.042382 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-19 04:07:51.042388 | orchestrator | Thursday 19 February 2026 04:07:34 +0000 (0:00:03.718) 0:05:12.143 ***** 2026-02-19 04:07:51.042393 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:07:51.042397 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:07:51.042401 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:07:51.042405 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:07:51.042410 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:07:51.042414 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:07:51.042418 | orchestrator | 2026-02-19 04:07:51.042422 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-19 04:07:51.042426 | orchestrator | Thursday 19 February 2026 04:07:34 +0000 (0:00:00.631) 0:05:12.774 ***** 2026-02-19 04:07:51.042431 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-19 04:07:51.042436 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-19 04:07:51.042440 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-19 04:07:51.042444 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-19 04:07:51.042448 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-19 04:07:51.042464 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-19 04:07:51.042468 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-19 04:07:51.042472 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-19 04:07:51.042476 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-19 04:07:51.042480 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-19 04:07:51.042484 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:07:51.042488 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-19 04:07:51.042492 | orchestrator | 
skipping: [testbed-node-0] 2026-02-19 04:07:51.042496 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-19 04:07:51.042500 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:07:51.042504 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-19 04:07:51.042508 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-19 04:07:51.042526 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-19 04:07:51.042530 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-19 04:07:51.042534 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-19 04:07:51.042538 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-19 04:07:51.042542 | orchestrator | 2026-02-19 04:07:51.042553 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-19 04:07:51.042558 | orchestrator | Thursday 19 February 2026 04:07:40 +0000 (0:00:05.558) 0:05:18.333 ***** 2026-02-19 04:07:51.042563 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-19 04:07:51.042568 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-19 04:07:51.042572 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-19 04:07:51.042577 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-19 04:07:51.042583 | 
orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-19 04:07:51.042588 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-19 04:07:51.042592 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-19 04:07:51.042597 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-19 04:07:51.042602 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-19 04:07:51.042607 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-19 04:07:51.042613 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-19 04:07:51.042620 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-19 04:07:51.042626 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-19 04:07:51.042633 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:07:51.042641 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-19 04:07:51.042648 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:07:51.042656 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-19 04:07:51.042663 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:07:51.042670 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-19 04:07:51.042677 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-19 04:07:51.042682 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-19 04:07:51.042687 | orchestrator | changed: [testbed-node-3] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-19 04:07:51.042692 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-19 04:07:51.042696 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-19 04:07:51.042700 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-19 04:07:51.042704 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-19 04:07:51.042711 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-19 04:07:51.042715 | orchestrator | 2026-02-19 04:07:51.042719 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-19 04:07:51.042723 | orchestrator | Thursday 19 February 2026 04:07:47 +0000 (0:00:07.385) 0:05:25.718 ***** 2026-02-19 04:07:51.042727 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:07:51.042731 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:07:51.042736 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:07:51.042740 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:07:51.042744 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:07:51.042748 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:07:51.042752 | orchestrator | 2026-02-19 04:07:51.042756 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-19 04:07:51.042764 | orchestrator | Thursday 19 February 2026 04:07:48 +0000 (0:00:00.782) 0:05:26.501 ***** 2026-02-19 04:07:51.042768 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:07:51.042772 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:07:51.042776 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:07:51.042780 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:07:51.042784 | 
orchestrator | skipping: [testbed-node-1] 2026-02-19 04:07:51.042788 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:07:51.042792 | orchestrator | 2026-02-19 04:07:51.042796 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-19 04:07:51.042800 | orchestrator | Thursday 19 February 2026 04:07:49 +0000 (0:00:00.656) 0:05:27.158 ***** 2026-02-19 04:07:51.042804 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:07:51.042808 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:07:51.042812 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:07:51.042817 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:07:51.042821 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:07:51.042825 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:07:51.042829 | orchestrator | 2026-02-19 04:07:51.042836 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-19 04:07:52.181180 | orchestrator | Thursday 19 February 2026 04:07:51 +0000 (0:00:01.940) 0:05:29.098 ***** 2026-02-19 04:07:52.181268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-02-19 04:07:52.181281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-19 04:07:52.181291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-19 04:07:52.181299 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:07:52.181324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-19 04:07:52.181352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-19 04:07:52.181374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-02-19 04:07:52.181382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-19 04:07:52.181390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-19 04:07:52.181398 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:07:52.181409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-19 04:07:52.181422 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:07:52.181430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-19 04:07:52.181444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:07:55.786792 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:07:55.786896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-19 04:07:55.786908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:07:55.786914 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:07:55.786919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-19 04:07:55.786925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:07:55.786955 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:07:55.786965 | orchestrator | 2026-02-19 04:07:55.786974 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-19 04:07:55.786984 | orchestrator | Thursday 19 February 2026 04:07:52 +0000 (0:00:01.469) 0:05:30.567 ***** 2026-02-19 04:07:55.787006 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-19 04:07:55.787016 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-19 04:07:55.787024 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:07:55.787032 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-19 04:07:55.787040 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-19 04:07:55.787069 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:07:55.787077 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-19 04:07:55.787085 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-19 04:07:55.787093 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:07:55.787102 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-19 04:07:55.787107 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-19 04:07:55.787111 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:07:55.787116 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-02-19 04:07:55.787121 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-19 04:07:55.787126 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:07:55.787131 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-19 04:07:55.787135 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-19 04:07:55.787140 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:07:55.787145 | orchestrator | 2026-02-19 04:07:55.787150 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-19 04:07:55.787155 | orchestrator | Thursday 19 February 2026 04:07:53 +0000 (0:00:00.896) 0:05:31.463 ***** 2026-02-19 04:07:55.787174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-19 04:07:55.787182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-19 04:07:55.787194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-19 04:07:55.787203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:07:55.787209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-19 04:07:55.787221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:08:47.217520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-19 04:08:47.217621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-19 04:08:47.217654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-19 04:08:47.217677 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:08:47.217689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:08:47.217695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:08:47.217713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:08:47.217718 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:08:47.217729 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-19 04:08:47.217734 | orchestrator | 2026-02-19 04:08:47.217741 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-19 04:08:47.217747 | orchestrator | Thursday 19 February 2026 04:07:56 +0000 (0:00:02.687) 0:05:34.151 
***** 2026-02-19 04:08:47.217751 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:08:47.217759 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:08:47.217767 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:08:47.217774 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:08:47.217781 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:08:47.217789 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:08:47.217795 | orchestrator | 2026-02-19 04:08:47.217806 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-19 04:08:47.217813 | orchestrator | Thursday 19 February 2026 04:07:56 +0000 (0:00:00.836) 0:05:34.988 ***** 2026-02-19 04:08:47.217821 | orchestrator | 2026-02-19 04:08:47.217828 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-19 04:08:47.217840 | orchestrator | Thursday 19 February 2026 04:07:57 +0000 (0:00:00.145) 0:05:35.133 ***** 2026-02-19 04:08:47.217845 | orchestrator | 2026-02-19 04:08:47.217850 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-19 04:08:47.217855 | orchestrator | Thursday 19 February 2026 04:07:57 +0000 (0:00:00.142) 0:05:35.275 ***** 2026-02-19 04:08:47.217859 | orchestrator | 2026-02-19 04:08:47.217864 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-19 04:08:47.217868 | orchestrator | Thursday 19 February 2026 04:07:57 +0000 (0:00:00.144) 0:05:35.419 ***** 2026-02-19 04:08:47.217873 | orchestrator | 2026-02-19 04:08:47.217877 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-19 04:08:47.217882 | orchestrator | Thursday 19 February 2026 04:07:57 +0000 (0:00:00.144) 0:05:35.564 ***** 2026-02-19 04:08:47.217887 | orchestrator | 2026-02-19 04:08:47.217891 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-02-19 04:08:47.217896 | orchestrator | Thursday 19 February 2026 04:07:57 +0000 (0:00:00.334) 0:05:35.898 ***** 2026-02-19 04:08:47.217900 | orchestrator | 2026-02-19 04:08:47.217905 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-19 04:08:47.217909 | orchestrator | Thursday 19 February 2026 04:07:57 +0000 (0:00:00.148) 0:05:36.047 ***** 2026-02-19 04:08:47.217914 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:08:47.217918 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:08:47.217923 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:08:47.217927 | orchestrator | 2026-02-19 04:08:47.217932 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-19 04:08:47.217937 | orchestrator | Thursday 19 February 2026 04:08:09 +0000 (0:00:11.796) 0:05:47.843 ***** 2026-02-19 04:08:47.217991 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:08:47.218000 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:08:47.218008 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:08:47.218083 | orchestrator | 2026-02-19 04:08:47.218091 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-19 04:08:47.218096 | orchestrator | Thursday 19 February 2026 04:08:22 +0000 (0:00:12.997) 0:06:00.841 ***** 2026-02-19 04:08:47.218102 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:08:47.218107 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:08:47.218113 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:08:47.218118 | orchestrator | 2026-02-19 04:08:47.218130 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-19 04:10:56.908646 | orchestrator | Thursday 19 February 2026 04:08:47 +0000 (0:00:24.436) 0:06:25.278 ***** 2026-02-19 04:10:56.908770 | orchestrator | changed: 
[testbed-node-3] 2026-02-19 04:10:56.908782 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:10:56.908788 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:10:56.908794 | orchestrator | 2026-02-19 04:10:56.908803 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-19 04:10:56.908815 | orchestrator | Thursday 19 February 2026 04:09:22 +0000 (0:00:35.263) 0:07:00.541 ***** 2026-02-19 04:10:56.908830 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:10:56.908838 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:10:56.908847 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:10:56.908855 | orchestrator | 2026-02-19 04:10:56.908864 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-19 04:10:56.908873 | orchestrator | Thursday 19 February 2026 04:09:23 +0000 (0:00:00.786) 0:07:01.328 ***** 2026-02-19 04:10:56.908882 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:10:56.908891 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:10:56.908900 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:10:56.908909 | orchestrator | 2026-02-19 04:10:56.908919 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-19 04:10:56.908926 | orchestrator | Thursday 19 February 2026 04:09:24 +0000 (0:00:00.787) 0:07:02.116 ***** 2026-02-19 04:10:56.908933 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:10:56.908938 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:10:56.908944 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:10:56.908950 | orchestrator | 2026-02-19 04:10:56.908956 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-19 04:10:56.908962 | orchestrator | Thursday 19 February 2026 04:09:44 +0000 (0:00:20.908) 0:07:23.024 ***** 2026-02-19 04:10:56.908967 | orchestrator | skipping: 
[testbed-node-3] 2026-02-19 04:10:56.908973 | orchestrator | 2026-02-19 04:10:56.908978 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-02-19 04:10:56.908983 | orchestrator | Thursday 19 February 2026 04:09:45 +0000 (0:00:00.146) 0:07:23.171 ***** 2026-02-19 04:10:56.908989 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:10:56.908994 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:10:56.908999 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:10:56.909064 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:10:56.909071 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:10:56.909077 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-02-19 04:10:56.909085 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 04:10:56.909090 | orchestrator | 2026-02-19 04:10:56.909096 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-19 04:10:56.909101 | orchestrator | Thursday 19 February 2026 04:10:08 +0000 (0:00:23.474) 0:07:46.646 ***** 2026-02-19 04:10:56.909107 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:10:56.909113 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:10:56.909118 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:10:56.909123 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:10:56.909129 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:10:56.909134 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:10:56.909157 | orchestrator | 2026-02-19 04:10:56.909163 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-19 04:10:56.909168 | orchestrator | Thursday 19 February 2026 04:10:16 +0000 (0:00:08.418) 0:07:55.064 ***** 2026-02-19 04:10:56.909174 | orchestrator | skipping: [testbed-node-0] 
2026-02-19 04:10:56.909179 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:10:56.909184 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:10:56.909189 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:10:56.909207 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:10:56.909214 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-02-19 04:10:56.909220 | orchestrator | 2026-02-19 04:10:56.909226 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-19 04:10:56.909232 | orchestrator | Thursday 19 February 2026 04:10:21 +0000 (0:00:04.273) 0:07:59.338 ***** 2026-02-19 04:10:56.909238 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 04:10:56.909245 | orchestrator | 2026-02-19 04:10:56.909251 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-19 04:10:56.909257 | orchestrator | Thursday 19 February 2026 04:10:35 +0000 (0:00:14.414) 0:08:13.753 ***** 2026-02-19 04:10:56.909263 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 04:10:56.909269 | orchestrator | 2026-02-19 04:10:56.909275 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-19 04:10:56.909282 | orchestrator | Thursday 19 February 2026 04:10:37 +0000 (0:00:01.567) 0:08:15.320 ***** 2026-02-19 04:10:56.909288 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:10:56.909294 | orchestrator | 2026-02-19 04:10:56.909300 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-19 04:10:56.909306 | orchestrator | Thursday 19 February 2026 04:10:39 +0000 (0:00:01.786) 0:08:17.107 ***** 2026-02-19 04:10:56.909312 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 04:10:56.909318 | orchestrator | 2026-02-19 04:10:56.909324 | 
orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-02-19 04:10:56.909330 | orchestrator | Thursday 19 February 2026 04:10:51 +0000 (0:00:12.351) 0:08:29.459 ***** 2026-02-19 04:10:56.909336 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:10:56.909343 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:10:56.909349 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:10:56.909355 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:10:56.909361 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:10:56.909367 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:10:56.909373 | orchestrator | 2026-02-19 04:10:56.909380 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-19 04:10:56.909386 | orchestrator | 2026-02-19 04:10:56.909392 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-02-19 04:10:56.909417 | orchestrator | Thursday 19 February 2026 04:10:53 +0000 (0:00:01.870) 0:08:31.330 ***** 2026-02-19 04:10:56.909430 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:10:56.909440 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:10:56.909449 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:10:56.909459 | orchestrator | 2026-02-19 04:10:56.909468 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-02-19 04:10:56.909477 | orchestrator | 2026-02-19 04:10:56.909486 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-02-19 04:10:56.909494 | orchestrator | Thursday 19 February 2026 04:10:54 +0000 (0:00:01.009) 0:08:32.339 ***** 2026-02-19 04:10:56.909503 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:10:56.909513 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:10:56.909522 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:10:56.909531 | orchestrator | 2026-02-19 
04:10:56.909540 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-19 04:10:56.909550 | orchestrator |
2026-02-19 04:10:56.909560 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-19 04:10:56.909579 | orchestrator | Thursday 19 February 2026 04:10:54 +0000 (0:00:00.701) 0:08:33.041 *****
2026-02-19 04:10:56.909590 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-19 04:10:56.909599 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-19 04:10:56.909609 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-19 04:10:56.909619 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-19 04:10:56.909629 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-19 04:10:56.909637 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-19 04:10:56.909646 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:10:56.909655 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-19 04:10:56.909665 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-19 04:10:56.909674 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-19 04:10:56.909684 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-19 04:10:56.909693 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-19 04:10:56.909703 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-19 04:10:56.909713 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:10:56.909721 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-19 04:10:56.909803 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-19 04:10:56.909809 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-19 04:10:56.909814 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-19 04:10:56.909820 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-19 04:10:56.909825 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-19 04:10:56.909830 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:10:56.909836 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-19 04:10:56.909841 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-19 04:10:56.909846 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-19 04:10:56.909851 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-19 04:10:56.909857 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-19 04:10:56.909862 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-19 04:10:56.909867 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:10:56.909879 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-19 04:10:56.909884 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-19 04:10:56.909893 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-19 04:10:56.909903 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-19 04:10:56.909912 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-19 04:10:56.909920 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-19 04:10:56.909929 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:10:56.909938 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-19 04:10:56.909948 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-19 04:10:56.909957 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-19 04:10:56.909962 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-19 04:10:56.909968 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-19 04:10:56.909973 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-19 04:10:56.909979 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:10:56.909984 | orchestrator |
2026-02-19 04:10:56.909989 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-19 04:10:56.910000 | orchestrator |
2026-02-19 04:10:56.910006 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-19 04:10:56.910011 | orchestrator | Thursday 19 February 2026 04:10:56 +0000 (0:00:01.349) 0:08:34.390 *****
2026-02-19 04:10:56.910061 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-19 04:10:56.910067 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-19 04:10:56.910073 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:10:56.910078 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-19 04:10:56.910083 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-19 04:10:56.910089 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:10:56.910094 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-19 04:10:56.910099 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-19 04:10:56.910104 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:10:56.910110 | orchestrator |
2026-02-19 04:10:56.910123 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-19 04:10:58.724615 | orchestrator |
2026-02-19 04:10:58.724708 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-19 04:10:58.724720 | orchestrator | Thursday 19 February 2026 04:10:56 +0000 (0:00:00.585) 0:08:34.975 *****
2026-02-19 04:10:58.724796 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:10:58.724806 | orchestrator |
2026-02-19 04:10:58.724815 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-19 04:10:58.724823 | orchestrator |
2026-02-19 04:10:58.724831 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-19 04:10:58.724839 | orchestrator | Thursday 19 February 2026 04:10:57 +0000 (0:00:00.919) 0:08:35.895 *****
2026-02-19 04:10:58.724847 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:10:58.724855 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:10:58.724863 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:10:58.724871 | orchestrator |
2026-02-19 04:10:58.724879 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 04:10:58.724887 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-19 04:10:58.724898 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-19 04:10:58.724906 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-19 04:10:58.724914 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-19 04:10:58.724922 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-19 04:10:58.724930 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-19 04:10:58.724937 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-19 04:10:58.724945 | orchestrator |
2026-02-19 04:10:58.724953 | orchestrator |
2026-02-19 04:10:58.724961 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 04:10:58.724969 | orchestrator | Thursday 19 February 2026 04:10:58 +0000 (0:00:00.473) 0:08:36.368 *****
2026-02-19 04:10:58.724976 | orchestrator | ===============================================================================
2026-02-19 04:10:58.724984 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 35.26s
2026-02-19 04:10:58.724992 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.05s
2026-02-19 04:10:58.725022 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.44s
2026-02-19 04:10:58.725030 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.47s
2026-02-19 04:10:58.725039 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.19s
2026-02-19 04:10:58.725060 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.95s
2026-02-19 04:10:58.725069 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 20.91s
2026-02-19 04:10:58.725076 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.53s
2026-02-19 04:10:58.725084 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.15s
2026-02-19 04:10:58.725092 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.41s
2026-02-19 04:10:58.725100 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.44s
2026-02-19 04:10:58.725107 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.32s
2026-02-19 04:10:58.725115 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.30s
2026-02-19 04:10:58.725123 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.00s
2026-02-19 04:10:58.725131 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.35s
2026-02-19 04:10:58.725138 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.80s
2026-02-19 04:10:58.725146 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.42s
2026-02-19 04:10:58.725154 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.33s
2026-02-19 04:10:58.725162 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.63s
2026-02-19 04:10:58.725169 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.39s
2026-02-19 04:11:01.026186 | orchestrator | 2026-02-19 04:11:01 | INFO  | Task 6d704b25-727f-4e54-ab56-f0d3e7cd9f43 (horizon) was prepared for execution.
2026-02-19 04:11:01.026270 | orchestrator | 2026-02-19 04:11:01 | INFO  | It takes a moment until task 6d704b25-727f-4e54-ab56-f0d3e7cd9f43 (horizon) has been started and output is visible here.
2026-02-19 04:11:08.488929 | orchestrator |
2026-02-19 04:11:08.489041 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-19 04:11:08.489057 | orchestrator |
2026-02-19 04:11:08.489068 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-19 04:11:08.489081 | orchestrator | Thursday 19 February 2026 04:11:05 +0000 (0:00:00.259) 0:00:00.259 *****
2026-02-19 04:11:08.489092 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:11:08.489104 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:11:08.489114 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:11:08.489126 | orchestrator |
2026-02-19 04:11:08.489137 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-19 04:11:08.489148 | orchestrator | Thursday 19 February 2026 04:11:05 +0000 (0:00:00.317) 0:00:00.576 *****
2026-02-19 04:11:08.489159 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-02-19 04:11:08.489171 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-02-19 04:11:08.489182 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-02-19 04:11:08.489195 | orchestrator |
2026-02-19 04:11:08.489206 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-02-19 04:11:08.489218 | orchestrator |
2026-02-19 04:11:08.489230 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-19 04:11:08.489241 | orchestrator | Thursday 19 February 2026 04:11:06 +0000 (0:00:00.477) 0:00:01.053 *****
2026-02-19 04:11:08.489254 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 04:11:08.489267 | orchestrator |
2026-02-19 04:11:08.489278 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-02-19 04:11:08.489315 | orchestrator | Thursday 19 February 2026 04:11:06 +0000 (0:00:00.532) 0:00:01.586 *****
2026-02-19 04:11:08.489349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-19 04:11:08.489387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-19 04:11:08.489416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-19 04:11:08.489430 | orchestrator |
2026-02-19 04:11:08.489441 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-02-19 04:11:08.489451 | orchestrator | Thursday 19 February 2026 04:11:07 +0000 (0:00:01.256) 0:00:02.843 *****
2026-02-19 04:11:08.489463 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:11:08.489476 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:11:08.489487 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:11:08.489499 | orchestrator |
2026-02-19 04:11:08.489511 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-19 04:11:08.489523 | orchestrator | Thursday 19 February 2026 04:11:08 +0000 (0:00:00.540) 0:00:03.383 *****
2026-02-19 04:11:08.489542 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-19 04:11:14.815263 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-19 04:11:14.815406 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-02-19 04:11:14.815433 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-02-19 04:11:14.815454 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-19 04:11:14.815473 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-19 04:11:14.815490 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-19 04:11:14.815534 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-19 04:11:14.815552 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-19 04:11:14.815571 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-19 04:11:14.815589 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-19 04:11:14.815609 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-19 04:11:14.815629 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-19 04:11:14.815648 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-19 04:11:14.815668 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-19 04:11:14.815687 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-19 04:11:14.815735 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-19 04:11:14.815747 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-19 04:11:14.815757 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-19 04:11:14.815768 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-19 04:11:14.815779 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-19 04:11:14.815790 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-19 04:11:14.815803 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-19 04:11:14.815815 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-19 04:11:14.815829 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-19 04:11:14.815843 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-19 04:11:14.815856 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-19 04:11:14.815885 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-19 04:11:14.815898 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-19 04:11:14.815910 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-19 04:11:14.815922 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-19 04:11:14.815935 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-19 04:11:14.815949 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-19 04:11:14.815961 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-19 04:11:14.815972 | orchestrator |
2026-02-19 04:11:14.815985 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-19 04:11:14.816007 | orchestrator | Thursday 19 February 2026 04:11:09 +0000 (0:00:00.777) 0:00:04.161 *****
2026-02-19 04:11:14.816018 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:11:14.816031 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:11:14.816041 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:11:14.816052 | orchestrator |
2026-02-19 04:11:14.816063 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-19 04:11:14.816074 | orchestrator | Thursday 19 February 2026 04:11:09 +0000 (0:00:00.395) 0:00:04.556 *****
2026-02-19 04:11:14.816085 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:14.816098 | orchestrator |
2026-02-19 04:11:14.816128 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-19 04:11:14.816140 | orchestrator | Thursday 19 February 2026 04:11:09 +0000 (0:00:00.319) 0:00:04.876 *****
2026-02-19 04:11:14.816151 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:14.816162 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:11:14.816174 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:11:14.816192 | orchestrator |
2026-02-19 04:11:14.816211 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-19 04:11:14.816229 | orchestrator | Thursday 19 February 2026 04:11:10 +0000 (0:00:00.328) 0:00:05.204 *****
2026-02-19 04:11:14.816247 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:11:14.816267 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:11:14.816286 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:11:14.816305 | orchestrator |
2026-02-19 04:11:14.816317 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-19 04:11:14.816328 | orchestrator | Thursday 19 February 2026 04:11:10 +0000 (0:00:00.381) 0:00:05.585 *****
2026-02-19 04:11:14.816339 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:14.816350 | orchestrator |
2026-02-19 04:11:14.816360 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-19 04:11:14.816371 | orchestrator | Thursday 19 February 2026 04:11:10 +0000 (0:00:00.136) 0:00:05.722 *****
2026-02-19 04:11:14.816382 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:14.816394 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:11:14.816405 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:11:14.816415 | orchestrator |
2026-02-19 04:11:14.816426 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-19 04:11:14.816438 | orchestrator | Thursday 19 February 2026 04:11:10 +0000 (0:00:00.308) 0:00:06.030 *****
2026-02-19 04:11:14.816457 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:11:14.816475 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:11:14.816492 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:11:14.816510 | orchestrator |
2026-02-19 04:11:14.816528 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-19 04:11:14.816546 | orchestrator | Thursday 19 February 2026 04:11:11 +0000 (0:00:00.506) 0:00:06.537 *****
2026-02-19 04:11:14.816563 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:14.816581 | orchestrator |
2026-02-19 04:11:14.816597 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-19 04:11:14.816614 | orchestrator | Thursday 19 February 2026 04:11:11 +0000 (0:00:00.136) 0:00:06.673 *****
2026-02-19 04:11:14.816631 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:14.816650 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:11:14.816667 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:11:14.816683 | orchestrator |
2026-02-19 04:11:14.816799 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-19 04:11:14.816822 | orchestrator | Thursday 19 February 2026 04:11:11 +0000 (0:00:00.327) 0:00:07.001 *****
2026-02-19 04:11:14.816842 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:11:14.816861 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:11:14.816880 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:11:14.816896 | orchestrator |
2026-02-19 04:11:14.816912 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-19 04:11:14.816930 | orchestrator | Thursday 19 February 2026 04:11:12 +0000 (0:00:00.347) 0:00:07.349 *****
2026-02-19 04:11:14.816961 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:14.816977 | orchestrator |
2026-02-19 04:11:14.816993 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-19 04:11:14.817011 | orchestrator | Thursday 19 February 2026 04:11:12 +0000 (0:00:00.143) 0:00:07.492 *****
2026-02-19 04:11:14.817029 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:14.817049 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:11:14.817067 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:11:14.817085 | orchestrator |
2026-02-19 04:11:14.817103 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-19 04:11:14.817134 | orchestrator | Thursday 19 February 2026 04:11:12 +0000 (0:00:00.516) 0:00:08.009 *****
2026-02-19 04:11:14.817153 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:11:14.817169 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:11:14.817184 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:11:14.817200 | orchestrator |
2026-02-19 04:11:14.817217 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-19 04:11:14.817235 | orchestrator | Thursday 19 February 2026 04:11:13 +0000 (0:00:00.387) 0:00:08.396 *****
2026-02-19 04:11:14.817254 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:14.817273 | orchestrator |
2026-02-19 04:11:14.817290 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-19 04:11:14.817305 | orchestrator | Thursday 19 February 2026 04:11:13 +0000 (0:00:00.135) 0:00:08.532 *****
2026-02-19 04:11:14.817324 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:14.817341 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:11:14.817360 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:11:14.817378 | orchestrator |
2026-02-19 04:11:14.817397 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-19 04:11:14.817414 | orchestrator | Thursday 19 February 2026 04:11:13 +0000 (0:00:00.322) 0:00:08.855 *****
2026-02-19 04:11:14.817431 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:11:14.817446 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:11:14.817463 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:11:14.817480 | orchestrator |
2026-02-19 04:11:14.817497 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-19 04:11:14.817515 | orchestrator | Thursday 19 February 2026 04:11:14 +0000 (0:00:00.317) 0:00:09.173 *****
2026-02-19 04:11:14.817532 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:14.817551 | orchestrator |
2026-02-19 04:11:14.817569 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-19 04:11:14.817588 | orchestrator | Thursday 19 February 2026 04:11:14 +0000 (0:00:00.351) 0:00:09.524 *****
2026-02-19 04:11:14.817606 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:14.817624 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:11:14.817641 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:11:14.817660 | orchestrator |
2026-02-19 04:11:14.817678 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-19 04:11:14.817793 | orchestrator | Thursday 19 February 2026 04:11:14 +0000 (0:00:00.316) 0:00:09.841 *****
2026-02-19 04:11:29.088200 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:11:29.088280 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:11:29.088287 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:11:29.088293 | orchestrator |
2026-02-19 04:11:29.088300 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-19 04:11:29.088306 | orchestrator | Thursday 19 February 2026 04:11:15 +0000 (0:00:00.325) 0:00:10.167 *****
2026-02-19 04:11:29.088312 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:29.088318 | orchestrator |
2026-02-19 04:11:29.088324 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-19 04:11:29.088329 | orchestrator | Thursday 19 February 2026 04:11:15 +0000 (0:00:00.133) 0:00:10.300 *****
2026-02-19 04:11:29.088334 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:29.088340 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:11:29.088359 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:11:29.088364 | orchestrator |
2026-02-19 04:11:29.088370 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-19 04:11:29.088375 | orchestrator | Thursday 19 February 2026 04:11:15 +0000 (0:00:00.320) 0:00:10.621 *****
2026-02-19 04:11:29.088381 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:11:29.088386 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:11:29.088393 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:11:29.088401 | orchestrator |
2026-02-19 04:11:29.088409 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-19 04:11:29.088416 | orchestrator | Thursday 19 February 2026 04:11:16 +0000 (0:00:00.547) 0:00:11.169 *****
2026-02-19 04:11:29.088423 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:29.088431 | orchestrator |
2026-02-19 04:11:29.088438 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-19 04:11:29.088445 | orchestrator | Thursday 19 February 2026 04:11:16 +0000 (0:00:00.139) 0:00:11.308 *****
2026-02-19 04:11:29.088453 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:29.088461 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:11:29.088468 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:11:29.088476 | orchestrator |
2026-02-19 04:11:29.088484 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-19 04:11:29.088492 | orchestrator | Thursday 19 February 2026 04:11:16 +0000 (0:00:00.315) 0:00:11.624 *****
2026-02-19 04:11:29.088499 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:11:29.088507 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:11:29.088515 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:11:29.088523 | orchestrator |
2026-02-19 04:11:29.088529 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-19 04:11:29.088534 | orchestrator | Thursday 19 February 2026 04:11:16 +0000 (0:00:00.320) 0:00:11.944 *****
2026-02-19 04:11:29.088539 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:29.088544 | orchestrator |
2026-02-19 04:11:29.088549 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-19 04:11:29.088554 | orchestrator | Thursday 19 February 2026 04:11:17 +0000 (0:00:00.135) 0:00:12.080 *****
2026-02-19 04:11:29.088559 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:29.088564 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:11:29.088569 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:11:29.088573 | orchestrator |
2026-02-19 04:11:29.088578 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-19 04:11:29.088583 | orchestrator | Thursday 19 February 2026 04:11:17 +0000 (0:00:00.527) 0:00:12.608 *****
2026-02-19 04:11:29.088588 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:11:29.088593 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:11:29.088597 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:11:29.088602 | orchestrator |
2026-02-19 04:11:29.088607 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-19 04:11:29.088612 | orchestrator | Thursday 19 February 2026 04:11:17 +0000 (0:00:00.323) 0:00:12.932 *****
2026-02-19 04:11:29.088617 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:29.088622 | orchestrator |
2026-02-19 04:11:29.088632 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-19 04:11:29.088638 | orchestrator | Thursday 19 February 2026 04:11:18 +0000 (0:00:00.127) 0:00:13.059 *****
2026-02-19 04:11:29.088642 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:11:29.088647 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:11:29.088652 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:11:29.088657 | orchestrator |
2026-02-19 04:11:29.088661 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-19 04:11:29.088666 | orchestrator | Thursday 19 February 2026 04:11:18 +0000 (0:00:00.330) 0:00:13.389 *****
2026-02-19 04:11:29.088671 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:11:29.088718 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:11:29.088726 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:11:29.088736 | orchestrator |
2026-02-19 04:11:29.088741 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-02-19 04:11:29.088746 | orchestrator | Thursday 19 February 2026 04:11:20 +0000 (0:00:01.969) 0:00:15.359 *****
2026-02-19 04:11:29.088751 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-19 04:11:29.088757 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-19 04:11:29.088762 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-19 04:11:29.088766 | orchestrator |
2026-02-19 04:11:29.088771 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-02-19 04:11:29.088776 | orchestrator | Thursday 19 February 2026 04:11:22 +0000 (0:00:01.919) 0:00:17.278 *****
2026-02-19 04:11:29.088783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-19 04:11:29.088789 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-19 04:11:29.088795 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-19 04:11:29.088800 | orchestrator |
2026-02-19 04:11:29.088807 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-02-19 04:11:29.088823 | orchestrator | Thursday 19 February 2026 04:11:24 +0000 (0:00:01.832) 0:00:19.110 *****
2026-02-19 04:11:29.088829 | orchestrator | changed: [testbed-node-0] =>
(item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-19 04:11:29.088835 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-19 04:11:29.088841 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-19 04:11:29.088846 | orchestrator | 2026-02-19 04:11:29.088852 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-19 04:11:29.088858 | orchestrator | Thursday 19 February 2026 04:11:25 +0000 (0:00:01.607) 0:00:20.717 ***** 2026-02-19 04:11:29.088863 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:11:29.088869 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:11:29.088875 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:11:29.088880 | orchestrator | 2026-02-19 04:11:29.088886 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-19 04:11:29.088892 | orchestrator | Thursday 19 February 2026 04:11:26 +0000 (0:00:00.522) 0:00:21.240 ***** 2026-02-19 04:11:29.088897 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:11:29.088903 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:11:29.088909 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:11:29.088914 | orchestrator | 2026-02-19 04:11:29.088920 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-19 04:11:29.088925 | orchestrator | Thursday 19 February 2026 04:11:26 +0000 (0:00:00.310) 0:00:21.550 ***** 2026-02-19 04:11:29.088931 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:11:29.088937 | orchestrator | 2026-02-19 04:11:29.088942 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-19 04:11:29.088948 | orchestrator | 
Thursday 19 February 2026 04:11:27 +0000 (0:00:00.618) 0:00:22.168 ***** 2026-02-19 04:11:29.088961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-19 04:11:29.088979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-19 04:11:29.731299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-19 04:11:29.731422 | orchestrator | 2026-02-19 04:11:29.731435 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-19 04:11:29.731443 | orchestrator | Thursday 19 February 2026 04:11:29 +0000 (0:00:01.940) 0:00:24.108 ***** 2026-02-19 04:11:29.731466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-19 04:11:29.731480 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:11:29.731494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-19 04:11:29.731501 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:11:29.731512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-19 04:11:32.311509 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:11:32.311589 | orchestrator | 2026-02-19 04:11:32.311599 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-02-19 04:11:32.311608 | orchestrator | Thursday 19 February 2026 04:11:29 +0000 (0:00:00.649) 0:00:24.758 ***** 2026-02-19 04:11:32.311631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-19 04:11:32.311641 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:11:32.311662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-19 04:11:32.311759 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:11:32.311768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-19 04:11:32.311775 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:11:32.311781 | orchestrator | 2026-02-19 04:11:32.311788 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-19 04:11:32.311816 | orchestrator | Thursday 19 February 2026 04:11:30 +0000 (0:00:00.831) 0:00:25.590 ***** 2026-02-19 04:11:32.311834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-19 04:12:20.541825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-19 04:12:20.541952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-19 04:12:20.541961 | orchestrator | 
2026-02-19 04:12:20.541967 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-19 04:12:20.541973 | orchestrator | Thursday 19 February 2026 04:11:32 +0000 (0:00:01.747) 0:00:27.337 ***** 2026-02-19 04:12:20.541977 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:12:20.541982 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:12:20.541986 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:12:20.541991 | orchestrator | 2026-02-19 04:12:20.541995 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-19 04:12:20.541999 | orchestrator | Thursday 19 February 2026 04:11:32 +0000 (0:00:00.362) 0:00:27.699 ***** 2026-02-19 04:12:20.542004 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:12:20.542008 | orchestrator | 2026-02-19 04:12:20.542050 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-19 04:12:20.542055 | orchestrator | Thursday 19 February 2026 04:11:33 +0000 (0:00:00.554) 0:00:28.254 ***** 2026-02-19 04:12:20.542061 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:12:20.542068 | orchestrator | 2026-02-19 04:12:20.542074 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-19 04:12:20.542081 | orchestrator | Thursday 19 February 2026 04:11:35 +0000 (0:00:02.355) 0:00:30.609 ***** 2026-02-19 04:12:20.542088 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:12:20.542094 | orchestrator | 2026-02-19 04:12:20.542100 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-19 04:12:20.542106 | orchestrator | Thursday 19 February 2026 04:11:38 +0000 (0:00:02.856) 0:00:33.466 ***** 2026-02-19 04:12:20.542119 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:12:20.542126 | orchestrator 
| 2026-02-19 04:12:20.542132 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-19 04:12:20.542137 | orchestrator | Thursday 19 February 2026 04:11:55 +0000 (0:00:17.500) 0:00:50.966 ***** 2026-02-19 04:12:20.542143 | orchestrator | 2026-02-19 04:12:20.542148 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-19 04:12:20.542154 | orchestrator | Thursday 19 February 2026 04:11:56 +0000 (0:00:00.071) 0:00:51.038 ***** 2026-02-19 04:12:20.542160 | orchestrator | 2026-02-19 04:12:20.542167 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-19 04:12:20.542172 | orchestrator | Thursday 19 February 2026 04:11:56 +0000 (0:00:00.066) 0:00:51.105 ***** 2026-02-19 04:12:20.542179 | orchestrator | 2026-02-19 04:12:20.542184 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-19 04:12:20.542188 | orchestrator | Thursday 19 February 2026 04:11:56 +0000 (0:00:00.072) 0:00:51.178 ***** 2026-02-19 04:12:20.542192 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:12:20.542195 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:12:20.542199 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:12:20.542203 | orchestrator | 2026-02-19 04:12:20.542207 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:12:20.542211 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-19 04:12:20.542217 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-19 04:12:20.542221 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-19 04:12:20.542224 | orchestrator | 2026-02-19 04:12:20.542228 | orchestrator | 2026-02-19 04:12:20.542232 
| orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:12:20.542236 | orchestrator | Thursday 19 February 2026 04:12:20 +0000 (0:00:24.374) 0:01:15.552 ***** 2026-02-19 04:12:20.542239 | orchestrator | =============================================================================== 2026-02-19 04:12:20.542243 | orchestrator | horizon : Restart horizon container ------------------------------------ 24.37s 2026-02-19 04:12:20.542247 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.50s 2026-02-19 04:12:20.542250 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.86s 2026-02-19 04:12:20.542254 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.36s 2026-02-19 04:12:20.542262 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.97s 2026-02-19 04:12:20.542266 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.94s 2026-02-19 04:12:20.542270 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.92s 2026-02-19 04:12:20.542273 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.83s 2026-02-19 04:12:20.542277 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.75s 2026-02-19 04:12:20.542281 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.61s 2026-02-19 04:12:20.542284 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.26s 2026-02-19 04:12:20.542288 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.83s 2026-02-19 04:12:20.542292 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s 2026-02-19 04:12:20.542300 | orchestrator | 
service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2026-02-19 04:12:20.951868 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2026-02-19 04:12:20.951969 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2026-02-19 04:12:20.952013 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2026-02-19 04:12:20.952023 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.54s 2026-02-19 04:12:20.952033 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2026-02-19 04:12:20.952043 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s 2026-02-19 04:12:23.306147 | orchestrator | 2026-02-19 04:12:23 | INFO  | Task 5808e73e-5428-4f36-8379-a5aa506c21a4 (skyline) was prepared for execution. 2026-02-19 04:12:23.306235 | orchestrator | 2026-02-19 04:12:23 | INFO  | It takes a moment until task 5808e73e-5428-4f36-8379-a5aa506c21a4 (skyline) has been started and output is visible here. 
2026-02-19 04:12:55.246566 | orchestrator | 2026-02-19 04:12:55.246645 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:12:55.246653 | orchestrator | 2026-02-19 04:12:55.246658 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 04:12:55.246664 | orchestrator | Thursday 19 February 2026 04:12:27 +0000 (0:00:00.258) 0:00:00.258 ***** 2026-02-19 04:12:55.246668 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:12:55.246674 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:12:55.246679 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:12:55.246684 | orchestrator | 2026-02-19 04:12:55.246688 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:12:55.246693 | orchestrator | Thursday 19 February 2026 04:12:27 +0000 (0:00:00.297) 0:00:00.555 ***** 2026-02-19 04:12:55.246698 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-02-19 04:12:55.246703 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-02-19 04:12:55.246707 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-02-19 04:12:55.246712 | orchestrator | 2026-02-19 04:12:55.246717 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-02-19 04:12:55.246721 | orchestrator | 2026-02-19 04:12:55.246726 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-19 04:12:55.246730 | orchestrator | Thursday 19 February 2026 04:12:28 +0000 (0:00:00.451) 0:00:01.006 ***** 2026-02-19 04:12:55.246736 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:12:55.246741 | orchestrator | 2026-02-19 04:12:55.246746 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-02-19 04:12:55.246750 | orchestrator | Thursday 19 February 2026 04:12:28 +0000 (0:00:00.533) 0:00:01.540 ***** 2026-02-19 04:12:55.246755 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-02-19 04:12:55.246759 | orchestrator | 2026-02-19 04:12:55.246764 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-02-19 04:12:55.246768 | orchestrator | Thursday 19 February 2026 04:12:32 +0000 (0:00:03.513) 0:00:05.054 ***** 2026-02-19 04:12:55.246773 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-02-19 04:12:55.246778 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-02-19 04:12:55.246783 | orchestrator | 2026-02-19 04:12:55.246787 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-02-19 04:12:55.246792 | orchestrator | Thursday 19 February 2026 04:12:39 +0000 (0:00:06.841) 0:00:11.896 ***** 2026-02-19 04:12:55.246796 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-19 04:12:55.246802 | orchestrator | 2026-02-19 04:12:55.246807 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-02-19 04:12:55.246812 | orchestrator | Thursday 19 February 2026 04:12:42 +0000 (0:00:03.271) 0:00:15.167 ***** 2026-02-19 04:12:55.246817 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-19 04:12:55.246821 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-02-19 04:12:55.246846 | orchestrator | 2026-02-19 04:12:55.246851 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-02-19 04:12:55.246856 | orchestrator | Thursday 19 February 2026 04:12:46 +0000 (0:00:04.185) 0:00:19.353 ***** 2026-02-19 04:12:55.246860 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-19 04:12:55.246865 | orchestrator | 2026-02-19 04:12:55.246869 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-02-19 04:12:55.246884 | orchestrator | Thursday 19 February 2026 04:12:50 +0000 (0:00:03.403) 0:00:22.757 ***** 2026-02-19 04:12:55.246889 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-02-19 04:12:55.246894 | orchestrator | 2026-02-19 04:12:55.246898 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-02-19 04:12:55.246903 | orchestrator | Thursday 19 February 2026 04:12:53 +0000 (0:00:03.882) 0:00:26.639 ***** 2026-02-19 04:12:55.246910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:12:55.246928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:12:55.246934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:12:55.246939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:12:55.246953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:12:55.246963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:12:58.867194 | orchestrator | 2026-02-19 04:12:58.867283 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-19 04:12:58.867296 | orchestrator | Thursday 19 February 2026 04:12:55 +0000 (0:00:01.301) 0:00:27.941 ***** 2026-02-19 04:12:58.867304 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:12:58.867312 | orchestrator | 2026-02-19 04:12:58.867320 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-02-19 04:12:58.867328 | orchestrator | Thursday 19 February 2026 04:12:55 +0000 (0:00:00.668) 0:00:28.609 ***** 2026-02-19 04:12:58.867337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:12:58.867381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:12:58.867389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:12:58.867412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:12:58.867422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:12:58.867435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:12:58.867443 | orchestrator | 2026-02-19 04:12:58.867450 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-02-19 04:12:58.867458 | orchestrator | Thursday 19 February 2026 04:12:58 +0000 (0:00:02.380) 0:00:30.990 ***** 2026-02-19 04:12:58.867469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-19 04:12:58.867477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-19 04:12:58.867485 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:12:58.867547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-19 04:12:59.953802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-19 04:12:59.953916 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:12:59.953957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-19 04:12:59.953972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-19 04:12:59.953981 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:12:59.953989 | orchestrator | 2026-02-19 04:12:59.953998 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-02-19 04:12:59.954007 | orchestrator | Thursday 19 February 2026 04:12:58 +0000 (0:00:00.576) 0:00:31.567 ***** 2026-02-19 04:12:59.954066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-19 04:12:59.954112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-19 04:12:59.954121 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:12:59.954134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-19 04:12:59.954142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-19 04:12:59.954149 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:12:59.954157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-19 04:12:59.954176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-19 04:13:08.650275 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:13:08.650402 | orchestrator | 2026-02-19 04:13:08.650420 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-02-19 04:13:08.650434 | orchestrator | Thursday 19 February 2026 04:12:59 +0000 (0:00:01.079) 0:00:32.646 ***** 2026-02-19 04:13:08.650465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:08.650542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:08.650559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:08.650592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:08.650647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:08.650669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-19 04:13:08.650686 | orchestrator |
2026-02-19 04:13:08.650703 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-02-19 04:13:08.650721 | orchestrator | Thursday 19 February 2026 04:13:02 +0000 (0:00:02.513) 0:00:35.160 *****
2026-02-19 04:13:08.650739 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-19 04:13:08.650758 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-19 04:13:08.650776 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-19 04:13:08.650794 | orchestrator |
2026-02-19 04:13:08.650813 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-02-19 04:13:08.650831 | orchestrator | Thursday 19 February 2026 04:13:04 +0000 (0:00:01.593) 0:00:36.753 *****
2026-02-19 04:13:08.650851 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-19 04:13:08.650885 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-19 04:13:08.650907 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-19 04:13:08.650921 | orchestrator |
2026-02-19 04:13:08.650934 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-02-19 04:13:08.650947 | orchestrator | Thursday 19 February 2026 04:13:06 +0000 (0:00:02.159) 0:00:38.913 *****
2026-02-19 04:13:08.650961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:08.650989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:10.831102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:10.831190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:10.831222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:10.831230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:10.831238 | orchestrator | 2026-02-19 04:13:10.831248 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-02-19 04:13:10.831258 | orchestrator | Thursday 19 February 2026 04:13:08 +0000 (0:00:02.433) 0:00:41.346 ***** 2026-02-19 04:13:10.831266 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:13:10.831274 | orchestrator | skipping: 
[testbed-node-1] 2026-02-19 04:13:10.831282 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:13:10.831289 | orchestrator | 2026-02-19 04:13:10.831308 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-02-19 04:13:10.831315 | orchestrator | Thursday 19 February 2026 04:13:08 +0000 (0:00:00.345) 0:00:41.692 ***** 2026-02-19 04:13:10.831327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:10.831335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:10.831353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:10.831361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:10.831379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-19 04:13:48.512510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}}}})
2026-02-19 04:13:48.512671 | orchestrator |
2026-02-19 04:13:48.512703 | orchestrator | TASK [skyline : Creating Skyline database] *************************************
2026-02-19 04:13:48.512722 | orchestrator | Thursday 19 February 2026 04:13:10 +0000 (0:00:01.831) 0:00:43.523 *****
2026-02-19 04:13:48.512739 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:13:48.512750 | orchestrator |
2026-02-19 04:13:48.512760 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ********
2026-02-19 04:13:48.512770 | orchestrator | Thursday 19 February 2026 04:13:13 +0000 (0:00:02.436) 0:00:45.959 *****
2026-02-19 04:13:48.512783 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:13:48.512805 | orchestrator |
2026-02-19 04:13:48.512824 | orchestrator | TASK [skyline : Running Skyline bootstrap container] ***************************
2026-02-19 04:13:48.512839 | orchestrator | Thursday 19 February 2026 04:13:15 +0000 (0:00:02.429) 0:00:48.388 *****
2026-02-19 04:13:48.512855 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:13:48.512870 | orchestrator |
2026-02-19 04:13:48.512885 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-19 04:13:48.512901 | orchestrator | Thursday 19 February 2026 04:13:23 +0000 (0:00:07.508) 0:00:55.897 *****
2026-02-19 04:13:48.512916 | orchestrator |
2026-02-19 04:13:48.512932 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-19 04:13:48.512947 | orchestrator | Thursday 19 February 2026 04:13:23 +0000 (0:00:00.068) 0:00:55.966 *****
2026-02-19 04:13:48.512962 | orchestrator |
2026-02-19 04:13:48.512978 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-19 04:13:48.512995 | orchestrator | Thursday 19 February 2026 04:13:23 +0000 (0:00:00.069) 0:00:56.035 *****
2026-02-19 04:13:48.513013 | orchestrator |
2026-02-19 04:13:48.513029 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-02-19 04:13:48.513046 | orchestrator | Thursday 19 February 2026 04:13:23 +0000 (0:00:00.070) 0:00:56.105 *****
2026-02-19 04:13:48.513062 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:13:48.513116 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:13:48.513152 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:13:48.513179 | orchestrator |
2026-02-19 04:13:48.513194 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-02-19 04:13:48.513210 | orchestrator | Thursday 19 February 2026 04:13:34 +0000 (0:00:10.918) 0:01:07.023 *****
2026-02-19 04:13:48.513225 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:13:48.513242 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:13:48.513256 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:13:48.513270 | orchestrator |
2026-02-19 04:13:48.513284 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 04:13:48.513301 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-19 04:13:48.513318 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-19 04:13:48.513334 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-19 04:13:48.513349 | orchestrator |
2026-02-19 04:13:48.513364 | orchestrator |
2026-02-19 04:13:48.513379 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 04:13:48.513417 | orchestrator | Thursday 19 February 2026 04:13:48 +0000 (0:00:13.851) 0:01:20.875 *****
2026-02-19 04:13:48.513509 | orchestrator | ===============================================================================
2026-02-19 04:13:48.513529 | orchestrator | skyline : Restart skyline-console container ---------------------------- 13.85s
2026-02-19 04:13:48.513547 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 10.92s
2026-02-19 04:13:48.513582 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.51s
2026-02-19 04:13:48.513606 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.84s
2026-02-19 04:13:48.513625 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.19s
2026-02-19 04:13:48.513640 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.88s
2026-02-19 04:13:48.513656 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.51s
2026-02-19 04:13:48.513673 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.40s
2026-02-19 04:13:48.513716 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.27s
2026-02-19 04:13:48.513727 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.51s
2026-02-19 04:13:48.513736 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.44s
2026-02-19 04:13:48.513746 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.43s
2026-02-19 04:13:48.513755 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.43s
2026-02-19 04:13:48.513765 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.38s
2026-02-19 04:13:48.513781 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.16s
2026-02-19 04:13:48.513797 | orchestrator | skyline : Check skyline container --------------------------------------- 1.83s
2026-02-19 04:13:48.513813 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.59s
2026-02-19 04:13:48.513829 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.30s
2026-02-19 04:13:48.513844 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.08s
2026-02-19 04:13:48.513859 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.67s
2026-02-19 04:13:50.826728 | orchestrator | 2026-02-19 04:13:50 | INFO  | Task f6b80025-cb8c-4427-ad63-ec13e388c728 (glance) was prepared for execution.
2026-02-19 04:13:50.826848 | orchestrator | 2026-02-19 04:13:50 | INFO  | It takes a moment until task f6b80025-cb8c-4427-ad63-ec13e388c728 (glance) has been started and output is visible here.
2026-02-19 04:14:26.355303 | orchestrator |
2026-02-19 04:14:26.355455 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-19 04:14:26.355469 | orchestrator |
2026-02-19 04:14:26.355475 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-19 04:14:26.355480 | orchestrator | Thursday 19 February 2026 04:13:55 +0000 (0:00:00.274) 0:00:00.274 *****
2026-02-19 04:14:26.355486 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:14:26.355493 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:14:26.355498 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:14:26.355503 | orchestrator |
2026-02-19 04:14:26.355508 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-19 04:14:26.355514 | orchestrator | Thursday 19 February 2026 04:13:55 +0000 (0:00:00.332) 0:00:00.607 *****
2026-02-19 04:14:26.355519 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-02-19 04:14:26.355524 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-02-19 04:14:26.355529 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-02-19 04:14:26.355534 | orchestrator |
2026-02-19 04:14:26.355539 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-02-19 04:14:26.355545 | orchestrator |
2026-02-19 04:14:26.355550 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-19 04:14:26.355573 | orchestrator | Thursday 19 February 2026 04:13:55 +0000 (0:00:00.479) 0:00:01.087 *****
2026-02-19 04:14:26.355579 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 04:14:26.355585 | orchestrator |
2026-02-19 04:14:26.355590 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-02-19 04:14:26.355595 | orchestrator | Thursday 19 February 2026 04:13:56 +0000 (0:00:00.542) 0:00:01.629 *****
2026-02-19 04:14:26.355600 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-02-19 04:14:26.355605 | orchestrator |
2026-02-19 04:14:26.355610 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-02-19 04:14:26.355615 | orchestrator | Thursday 19 February 2026 04:14:00 +0000 (0:00:03.538) 0:00:05.168 *****
2026-02-19 04:14:26.355620 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-02-19 04:14:26.355626 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-02-19 04:14:26.355631 | orchestrator |
2026-02-19 04:14:26.355636 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-02-19 04:14:26.355641 | orchestrator | Thursday 19 February 2026 04:14:07 +0000 (0:00:06.999) 0:00:12.167 *****
2026-02-19 04:14:26.355647 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-19 04:14:26.355653 | orchestrator |
2026-02-19 04:14:26.355658 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-02-19 04:14:26.355663 | orchestrator | Thursday 19 February 2026 04:14:10 +0000 (0:00:03.352) 0:00:15.520 *****
2026-02-19 04:14:26.355668 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-19 04:14:26.355674 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-02-19 04:14:26.355679 | orchestrator |
2026-02-19 04:14:26.355684 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-02-19 04:14:26.355689 | orchestrator | Thursday 19 February 2026 04:14:14 +0000 (0:00:04.294) 0:00:19.815 *****
2026-02-19 04:14:26.355694 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-19 04:14:26.355699 | orchestrator |
2026-02-19 04:14:26.355716 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-02-19 04:14:26.355721 | orchestrator | Thursday 19 February 2026 04:14:17 +0000 (0:00:03.321) 0:00:23.136 *****
2026-02-19 04:14:26.355726 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-02-19 04:14:26.355731 | orchestrator |
2026-02-19 04:14:26.355736 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-02-19 04:14:26.355741 | orchestrator | Thursday 19 February 2026 04:14:21 +0000 (0:00:03.950) 0:00:27.087 *****
2026-02-19 04:14:26.355765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 04:14:26.355777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 04:14:26.355787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 04:14:26.355793 | orchestrator | 2026-02-19 04:14:26.355799 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-02-19 04:14:26.355804 | orchestrator | Thursday 19 February 2026 04:14:25 +0000 (0:00:03.630) 0:00:30.718 ***** 2026-02-19 04:14:26.355813 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:14:26.355819 | orchestrator | 2026-02-19 04:14:26.355827 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-19 04:14:42.201308 | orchestrator | Thursday 19 February 2026 04:14:26 +0000 (0:00:00.768) 0:00:31.486 ***** 2026-02-19 04:14:42.201438 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:14:42.201458 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:14:42.201472 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:14:42.201485 | orchestrator | 2026-02-19 04:14:42.201499 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-19 04:14:42.201513 | orchestrator | Thursday 19 February 2026 04:14:30 +0000 (0:00:03.707) 0:00:35.194 ***** 2026-02-19 04:14:42.201527 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-19 04:14:42.201542 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-19 04:14:42.201555 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-19 04:14:42.201569 | orchestrator | 2026-02-19 04:14:42.201583 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-02-19 04:14:42.201596 | orchestrator | Thursday 19 February 2026 04:14:31 +0000 (0:00:01.601) 0:00:36.795 ***** 2026-02-19 04:14:42.201610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-19 
04:14:42.201623 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-19 04:14:42.201636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-19 04:14:42.201650 | orchestrator | 2026-02-19 04:14:42.201663 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-19 04:14:42.201677 | orchestrator | Thursday 19 February 2026 04:14:33 +0000 (0:00:01.382) 0:00:38.177 ***** 2026-02-19 04:14:42.201704 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:14:42.201727 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:14:42.201740 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:14:42.201751 | orchestrator | 2026-02-19 04:14:42.201762 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-19 04:14:42.201773 | orchestrator | Thursday 19 February 2026 04:14:33 +0000 (0:00:00.697) 0:00:38.875 ***** 2026-02-19 04:14:42.201785 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:14:42.201797 | orchestrator | 2026-02-19 04:14:42.201808 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-19 04:14:42.201821 | orchestrator | Thursday 19 February 2026 04:14:33 +0000 (0:00:00.145) 0:00:39.021 ***** 2026-02-19 04:14:42.201832 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:14:42.201843 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:14:42.201856 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:14:42.201867 | orchestrator | 2026-02-19 04:14:42.201879 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-19 04:14:42.201892 | orchestrator | Thursday 19 February 2026 04:14:34 +0000 (0:00:00.299) 0:00:39.320 ***** 2026-02-19 04:14:42.201907 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:14:42.201920 | orchestrator | 2026-02-19 04:14:42.201935 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-19 04:14:42.201949 | orchestrator | Thursday 19 February 2026 04:14:34 +0000 (0:00:00.724) 0:00:40.045 ***** 2026-02-19 04:14:42.201987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 04:14:42.202112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 04:14:42.202136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 04:14:42.202158 | orchestrator | 2026-02-19 04:14:42.202172 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-19 04:14:42.202185 | orchestrator | Thursday 19 February 2026 04:14:38 +0000 (0:00:03.871) 0:00:43.917 ***** 2026-02-19 04:14:42.202207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-19 04:14:45.741477 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:14:45.741612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-19 04:14:45.741694 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:14:45.741714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-19 04:14:45.741729 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:14:45.741744 | orchestrator | 2026-02-19 04:14:45.741760 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-19 04:14:45.741776 | orchestrator | Thursday 19 February 2026 04:14:42 +0000 (0:00:03.416) 0:00:47.333 ***** 2026-02-19 04:14:45.741822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-19 04:14:45.741852 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:14:45.741868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-19 04:14:45.741885 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:14:45.741913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-19 04:15:20.549909 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:15:20.550009 | orchestrator | 2026-02-19 04:15:20.550085 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-19 04:15:20.550118 | orchestrator | Thursday 19 February 2026 04:14:45 +0000 (0:00:03.539) 0:00:50.872 ***** 2026-02-19 04:15:20.550128 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:15:20.550137 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:15:20.550146 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:15:20.550155 | orchestrator | 2026-02-19 04:15:20.550163 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-19 04:15:20.550172 | orchestrator | Thursday 19 February 2026 04:14:48 +0000 (0:00:03.201) 0:00:54.074 ***** 2026-02-19 04:15:20.550198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 04:15:20.550211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 04:15:20.550244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 04:15:20.550262 | orchestrator | 2026-02-19 04:15:20.550271 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-19 04:15:20.550280 | orchestrator | Thursday 19 February 2026 04:14:52 +0000 (0:00:03.978) 0:00:58.053 ***** 2026-02-19 04:15:20.550289 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:15:20.550297 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:15:20.550306 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:15:20.550314 | orchestrator | 2026-02-19 04:15:20.550323 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-19 04:15:20.550361 | orchestrator | Thursday 19 February 2026 04:14:58 +0000 (0:00:05.896) 0:01:03.949 ***** 2026-02-19 04:15:20.550370 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:15:20.550379 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:15:20.550387 | 
orchestrator | skipping: [testbed-node-2] 2026-02-19 04:15:20.550396 | orchestrator | 2026-02-19 04:15:20.550404 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-19 04:15:20.550413 | orchestrator | Thursday 19 February 2026 04:15:02 +0000 (0:00:03.425) 0:01:07.375 ***** 2026-02-19 04:15:20.550421 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:15:20.550429 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:15:20.550438 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:15:20.550446 | orchestrator | 2026-02-19 04:15:20.550456 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-19 04:15:20.550466 | orchestrator | Thursday 19 February 2026 04:15:05 +0000 (0:00:03.336) 0:01:10.712 ***** 2026-02-19 04:15:20.550475 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:15:20.550485 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:15:20.550495 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:15:20.550505 | orchestrator | 2026-02-19 04:15:20.550514 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-19 04:15:20.550524 | orchestrator | Thursday 19 February 2026 04:15:08 +0000 (0:00:03.353) 0:01:14.066 ***** 2026-02-19 04:15:20.550533 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:15:20.550543 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:15:20.550553 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:15:20.550562 | orchestrator | 2026-02-19 04:15:20.550572 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-19 04:15:20.550581 | orchestrator | Thursday 19 February 2026 04:15:12 +0000 (0:00:03.543) 0:01:17.610 ***** 2026-02-19 04:15:20.550597 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:15:20.550606 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:15:20.550616 | 
orchestrator | skipping: [testbed-node-2] 2026-02-19 04:15:20.550626 | orchestrator | 2026-02-19 04:15:20.550635 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-19 04:15:20.550645 | orchestrator | Thursday 19 February 2026 04:15:13 +0000 (0:00:00.557) 0:01:18.168 ***** 2026-02-19 04:15:20.550655 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-19 04:15:20.550667 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:15:20.550681 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-19 04:15:20.550696 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:15:20.550711 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-19 04:15:20.550725 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:15:20.550740 | orchestrator | 2026-02-19 04:15:20.550754 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-19 04:15:20.550768 | orchestrator | Thursday 19 February 2026 04:15:16 +0000 (0:00:03.293) 0:01:21.461 ***** 2026-02-19 04:15:20.550781 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:15:20.550793 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:15:20.550806 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:15:20.550820 | orchestrator | 2026-02-19 04:15:20.550834 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-19 04:15:20.550858 | orchestrator | Thursday 19 February 2026 04:15:20 +0000 (0:00:04.219) 0:01:25.681 ***** 2026-02-19 04:16:32.322819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 04:16:32.322947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 04:16:32.323027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 04:16:32.323042 | orchestrator | 2026-02-19 04:16:32.323053 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-19 04:16:32.323063 | orchestrator | Thursday 19 February 2026 04:15:24 +0000 (0:00:03.717) 0:01:29.399 ***** 2026-02-19 04:16:32.323072 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:16:32.323082 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:16:32.323090 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:16:32.323099 | orchestrator | 2026-02-19 04:16:32.323108 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-19 04:16:32.323116 | orchestrator | Thursday 19 February 2026 04:15:24 +0000 (0:00:00.663) 0:01:30.062 ***** 2026-02-19 04:16:32.323125 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:16:32.323133 | orchestrator | 2026-02-19 04:16:32.323144 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-19 04:16:32.323159 | orchestrator | Thursday 19 February 2026 04:15:27 +0000 (0:00:02.254) 0:01:32.317 ***** 2026-02-19 04:16:32.323183 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:16:32.323198 | orchestrator | 2026-02-19 04:16:32.323212 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-19 04:16:32.323240 | orchestrator | Thursday 19 February 2026 04:15:29 +0000 (0:00:02.402) 0:01:34.719 ***** 2026-02-19 04:16:32.323255 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:16:32.323299 | orchestrator | 2026-02-19 04:16:32.323314 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-19 04:16:32.323323 | orchestrator | Thursday 19 February 2026 04:15:31 +0000 (0:00:02.215) 0:01:36.935 ***** 2026-02-19 04:16:32.323331 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:16:32.323340 | orchestrator | 2026-02-19 04:16:32.323348 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-19 04:16:32.323357 | orchestrator | Thursday 19 February 2026 04:16:00 +0000 (0:00:28.344) 0:02:05.280 ***** 2026-02-19 04:16:32.323365 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:16:32.323374 | orchestrator | 2026-02-19 04:16:32.323383 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-19 04:16:32.323391 | orchestrator | Thursday 19 February 2026 04:16:02 +0000 (0:00:02.280) 0:02:07.560 ***** 2026-02-19 04:16:32.323400 | orchestrator | 2026-02-19 04:16:32.323408 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-19 04:16:32.323417 | orchestrator | Thursday 19 February 2026 04:16:02 +0000 (0:00:00.069) 0:02:07.630 ***** 2026-02-19 04:16:32.323425 | orchestrator | 2026-02-19 04:16:32.323434 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-19 04:16:32.323443 | orchestrator | Thursday 19 February 2026 04:16:02 +0000 (0:00:00.068) 0:02:07.698 ***** 2026-02-19 04:16:32.323451 | orchestrator | 2026-02-19 04:16:32.323460 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-19 04:16:32.323468 | orchestrator | Thursday 19 February 2026 04:16:02 +0000 (0:00:00.069) 0:02:07.767 ***** 2026-02-19 04:16:32.323477 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:16:32.323485 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:16:32.323494 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:16:32.323502 | orchestrator | 2026-02-19 04:16:32.323511 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:16:32.323521 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-19 04:16:32.323531 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-19 04:16:32.323540 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-19 04:16:32.323549 | orchestrator | 2026-02-19 04:16:32.323557 | orchestrator | 2026-02-19 04:16:32.323566 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:16:32.323574 | orchestrator | Thursday 19 February 2026 04:16:32 +0000 (0:00:29.678) 0:02:37.446 ***** 2026-02-19 04:16:32.323583 | orchestrator | =============================================================================== 2026-02-19 04:16:32.323592 | orchestrator | glance : Restart glance-api container ---------------------------------- 29.68s 2026-02-19 04:16:32.323600 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.34s 2026-02-19 04:16:32.323609 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.00s 2026-02-19 04:16:32.323625 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.90s 2026-02-19 04:16:32.669523 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.29s 2026-02-19 04:16:32.669628 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.22s 2026-02-19 04:16:32.669644 | orchestrator | glance : Copying over config.json files for services -------------------- 3.98s 2026-02-19 04:16:32.669656 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.95s 2026-02-19 04:16:32.669667 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.87s 2026-02-19 04:16:32.669719 | orchestrator | glance : Check glance containers ---------------------------------------- 3.72s 2026-02-19 04:16:32.669731 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.71s 2026-02-19 04:16:32.669742 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.63s 2026-02-19 04:16:32.669752 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.54s 2026-02-19 04:16:32.669763 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.54s 2026-02-19 04:16:32.669774 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.54s 2026-02-19 04:16:32.669785 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.43s 2026-02-19 04:16:32.669795 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.42s 2026-02-19 04:16:32.669806 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.35s 2026-02-19 04:16:32.669817 | 
orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.35s 2026-02-19 04:16:32.669827 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.34s 2026-02-19 04:16:35.107712 | orchestrator | 2026-02-19 04:16:35 | INFO  | Task 4e090026-4752-4049-b449-e0a07b694ded (cinder) was prepared for execution. 2026-02-19 04:16:35.107818 | orchestrator | 2026-02-19 04:16:35 | INFO  | It takes a moment until task 4e090026-4752-4049-b449-e0a07b694ded (cinder) has been started and output is visible here. 2026-02-19 04:17:12.478545 | orchestrator | 2026-02-19 04:17:12.478615 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:17:12.478623 | orchestrator | 2026-02-19 04:17:12.478627 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 04:17:12.478631 | orchestrator | Thursday 19 February 2026 04:16:39 +0000 (0:00:00.259) 0:00:00.259 ***** 2026-02-19 04:17:12.478636 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:17:12.478641 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:17:12.478645 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:17:12.478649 | orchestrator | 2026-02-19 04:17:12.478653 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:17:12.478657 | orchestrator | Thursday 19 February 2026 04:16:39 +0000 (0:00:00.304) 0:00:00.564 ***** 2026-02-19 04:17:12.478660 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-19 04:17:12.478665 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-19 04:17:12.478669 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-19 04:17:12.478672 | orchestrator | 2026-02-19 04:17:12.478676 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-19 04:17:12.478680 | orchestrator | 
2026-02-19 04:17:12.478684 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-19 04:17:12.478687 | orchestrator | Thursday 19 February 2026 04:16:40 +0000 (0:00:00.474) 0:00:01.038 ***** 2026-02-19 04:17:12.478691 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:17:12.478696 | orchestrator | 2026-02-19 04:17:12.478700 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-19 04:17:12.478703 | orchestrator | Thursday 19 February 2026 04:16:40 +0000 (0:00:00.607) 0:00:01.645 ***** 2026-02-19 04:17:12.478708 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-19 04:17:12.478712 | orchestrator | 2026-02-19 04:17:12.478716 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-19 04:17:12.478720 | orchestrator | Thursday 19 February 2026 04:16:44 +0000 (0:00:03.851) 0:00:05.497 ***** 2026-02-19 04:17:12.478724 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-19 04:17:12.478728 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-19 04:17:12.478748 | orchestrator | 2026-02-19 04:17:12.478752 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-19 04:17:12.478756 | orchestrator | Thursday 19 February 2026 04:16:51 +0000 (0:00:06.861) 0:00:12.358 ***** 2026-02-19 04:17:12.478760 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-19 04:17:12.478764 | orchestrator | 2026-02-19 04:17:12.478768 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-19 04:17:12.478771 | orchestrator | Thursday 19 February 2026 04:16:54 +0000 
(0:00:03.407) 0:00:15.766 ***** 2026-02-19 04:17:12.478775 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-19 04:17:12.478779 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-19 04:17:12.478783 | orchestrator | 2026-02-19 04:17:12.478786 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-19 04:17:12.478790 | orchestrator | Thursday 19 February 2026 04:16:59 +0000 (0:00:04.312) 0:00:20.079 ***** 2026-02-19 04:17:12.478794 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-19 04:17:12.478798 | orchestrator | 2026-02-19 04:17:12.478801 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-19 04:17:12.478805 | orchestrator | Thursday 19 February 2026 04:17:02 +0000 (0:00:03.565) 0:00:23.644 ***** 2026-02-19 04:17:12.478809 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-19 04:17:12.478812 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-19 04:17:12.478816 | orchestrator | 2026-02-19 04:17:12.478820 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-19 04:17:12.478823 | orchestrator | Thursday 19 February 2026 04:17:10 +0000 (0:00:07.642) 0:00:31.286 ***** 2026-02-19 04:17:12.478839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-19 04:17:12.478856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-19 04:17:12.478861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-19 04:17:12.478870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:12.478876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:12.478882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:12.478887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:12.478896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:18.267959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:17:18.268119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:18.268140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:18.268169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:18.268181 | orchestrator |
2026-02-19 04:17:18.268195 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-19 04:17:18.268207 | orchestrator | Thursday 19 February 2026 04:17:12 +0000 (0:00:02.154) 0:00:33.441 *****
2026-02-19 04:17:18.268278 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:17:18.268293 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:17:18.268303 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:17:18.268314 | orchestrator |
2026-02-19 04:17:18.268325 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-19 04:17:18.268336 | orchestrator | Thursday 19 February 2026 04:17:13 +0000 (0:00:00.506) 0:00:33.947 *****
2026-02-19 04:17:18.268347 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 04:17:18.268364 | orchestrator |
2026-02-19 04:17:18.268391 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-02-19 04:17:18.268413 | orchestrator | Thursday 19 February 2026 04:17:13 +0000 (0:00:00.547) 0:00:34.495 *****
2026-02-19 04:17:18.268497 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-02-19 04:17:18.268523 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-02-19 04:17:18.268560 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-02-19 04:17:18.268578 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-02-19 04:17:18.268591 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-02-19 04:17:18.268602 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-02-19 04:17:18.268612 | orchestrator |
2026-02-19 04:17:18.268623 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-02-19 04:17:18.268634 | orchestrator | Thursday 19 February 2026 04:17:15 +0000 (0:00:01.582) 0:00:36.078 *****
2026-02-19 04:17:18.268708 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-19 04:17:18.268725 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-19 04:17:18.268746 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-19 04:17:18.268758 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-19 04:17:18.268779 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-19 04:17:29.032271 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-19 04:17:29.032383 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-19 04:17:29.032420 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-19 04:17:29.032434 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-19 04:17:29.032446 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-19 04:17:29.032501 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-19 04:17:29.032515 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-19 04:17:29.032527 | orchestrator |
2026-02-19 04:17:29.032541 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-02-19 04:17:29.032553 | orchestrator | Thursday 19 February 2026 04:17:18 +0000 (0:00:03.364) 0:00:39.442 *****
2026-02-19 04:17:29.032564 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-02-19 04:17:29.032576 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-02-19 04:17:29.032587 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-02-19 04:17:29.032598 | orchestrator |
2026-02-19 04:17:29.032609 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-02-19 04:17:29.032620 | orchestrator | Thursday 19 February 2026 04:17:20 +0000 (0:00:01.521) 0:00:40.964 *****
2026-02-19 04:17:29.032631 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-02-19 04:17:29.032642 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-02-19 04:17:29.032653 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-02-19 04:17:29.032669 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-02-19 04:17:29.032681 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-02-19 04:17:29.032692 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-02-19 04:17:29.032703 | orchestrator |
2026-02-19 04:17:29.032714 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-02-19 04:17:29.032724 | orchestrator | Thursday 19 February 2026 04:17:22 +0000 (0:00:02.686) 0:00:43.651 *****
2026-02-19 04:17:29.032745 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-02-19 04:17:29.032757 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-02-19 04:17:29.032769 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-02-19 04:17:29.032782 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-02-19 04:17:29.032794 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-02-19 04:17:29.032806 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-02-19 04:17:29.032819 | orchestrator |
2026-02-19 04:17:29.032833 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-02-19 04:17:29.032845 | orchestrator | Thursday 19 February 2026 04:17:23 +0000 (0:00:01.053) 0:00:44.705 *****
2026-02-19 04:17:29.032858 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:17:29.032872 | orchestrator |
2026-02-19 04:17:29.032885 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-02-19 04:17:29.032898 | orchestrator | Thursday 19 February 2026 04:17:23 +0000 (0:00:00.139) 0:00:44.844 *****
2026-02-19 04:17:29.032911 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:17:29.032924 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:17:29.032937 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:17:29.032949 | orchestrator |
2026-02-19 04:17:29.032962 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-19 04:17:29.032975 | orchestrator | Thursday 19 February 2026 04:17:24 +0000 (0:00:00.503) 0:00:45.347 *****
2026-02-19 04:17:29.032988 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 04:17:29.033002 | orchestrator |
2026-02-19 04:17:29.033014 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-02-19 04:17:29.033027 | orchestrator | Thursday 19 February 2026 04:17:25 +0000 (0:00:00.588) 0:00:45.935 *****
2026-02-19 04:17:29.033050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:29.937118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:29.937284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:29.937325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:29.937335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:29.937343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:29.937368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:17:29.937378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:17:29.937390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:17:29.937405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:29.937414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:29.937421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:29.937428 | orchestrator |
2026-02-19 04:17:29.937437 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-02-19 04:17:29.937446 | orchestrator | Thursday 19 February 2026 04:17:29 +0000 (0:00:04.067) 0:00:50.003 *****
2026-02-19 04:17:29.937461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:30.038417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:30.038505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:17:30.038512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:30.038517 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:17:30.038524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:30.038529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:30.038543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:17:30.038585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:30.038589 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:17:30.038593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:30.038598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:30.038602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:17:30.038605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:30.038614 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:17:30.038617 | orchestrator |
2026-02-19 04:17:30.038623 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-02-19 04:17:30.038632 | orchestrator | Thursday 19 February 2026 04:17:30 +0000 (0:00:00.915) 0:00:50.918 *****
2026-02-19 04:17:30.633809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:30.633932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:30.633949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:17:30.633963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:30.633976 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:17:30.633990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-19 04:17:30.634131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:17:30.634159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-19 04:17:30.634171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-19 04:17:30.634182 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:17:30.634194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-19 04:17:30.634232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:17:30.634280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-19 04:17:35.117644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-19 04:17:35.117741 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:17:35.117754 | orchestrator | 2026-02-19 04:17:35.117762 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-02-19 04:17:35.117771 | orchestrator | Thursday 19 February 2026 04:17:30 +0000 (0:00:00.908) 0:00:51.826 ***** 2026-02-19 04:17:35.117780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-19 04:17:35.117788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-19 
04:17:35.117795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-19 04:17:35.117835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:35.117847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:35.117852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:35.117856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:35.117861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:35.117868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:35.117876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:47.919333 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:47.919442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:47.919459 | orchestrator | 2026-02-19 04:17:47.919474 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-19 04:17:47.919486 | orchestrator | Thursday 19 February 2026 04:17:35 +0000 (0:00:04.255) 0:00:56.082 ***** 2026-02-19 04:17:47.919498 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-19 04:17:47.919510 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-19 04:17:47.919521 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-19 04:17:47.919531 | orchestrator | 2026-02-19 04:17:47.919542 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-19 04:17:47.919553 | orchestrator | Thursday 19 February 2026 04:17:37 +0000 (0:00:01.874) 0:00:57.957 ***** 2026-02-19 04:17:47.919566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-19 04:17:47.919607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-19 04:17:47.919645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-19 04:17:47.919658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:47.919670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:47.919682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:47.919701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:47.919713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:47.919739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:50.270979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:50.271081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:50.271123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-19 04:17:50.271138 | orchestrator | 2026-02-19 04:17:50.271152 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-19 04:17:50.271165 | orchestrator | Thursday 19 February 2026 04:17:47 +0000 (0:00:10.923) 0:01:08.881 ***** 2026-02-19 04:17:50.271177 | orchestrator | changed: [testbed-node-0] 
2026-02-19 04:17:50.271190 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:17:50.271232 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:17:50.271243 | orchestrator |
2026-02-19 04:17:50.271254 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-02-19 04:17:50.271265 | orchestrator | Thursday 19 February 2026 04:17:49 +0000 (0:00:01.496) 0:01:10.377 *****
2026-02-19 04:17:50.271278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:50.271305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:50.271349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:17:50.271373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:50.271394 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:17:50.271406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:50.271418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:50.271429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:17:50.271454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:53.988312 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:17:53.988420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:53.988465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:53.988480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:17:53.988492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:17:53.988504 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:17:53.988516 | orchestrator |
2026-02-19 04:17:53.988528 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-02-19 04:17:53.988540 | orchestrator | Thursday 19 February 2026 04:17:50 +0000 (0:00:00.859) 0:01:11.237 *****
2026-02-19 04:17:53.988551 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:17:53.988561 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:17:53.988572 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:17:53.988583 | orchestrator |
2026-02-19 04:17:53.988593 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-02-19 04:17:53.988604 | orchestrator | Thursday 19 February 2026 04:17:50 +0000 (0:00:00.581) 0:01:11.818 *****
2026-02-19 04:17:53.988647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:53.988670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:53.988682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-19 04:17:53.988694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:53.988706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:53.988722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 04:17:53.988742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:19:31.313738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:19:31.313865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 04:19:31.313881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:19:31.313892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:19:31.313918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 04:19:31.313948 | orchestrator |
2026-02-19 04:19:31.313962 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-19 04:19:31.313973 | orchestrator | Thursday 19 February 2026 04:17:54 +0000 (0:00:03.142) 0:01:14.960 *****
2026-02-19 04:19:31.313983 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:19:31.313994 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:19:31.314003 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:19:31.314012 | orchestrator |
2026-02-19 04:19:31.314100 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-02-19 04:19:31.314149 | orchestrator | Thursday 19 February 2026 04:17:54 +0000 (0:00:00.301) 0:01:15.262 *****
2026-02-19 04:19:31.314162 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:19:31.314172 | orchestrator |
2026-02-19 04:19:31.314200 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-02-19 04:19:31.314210 | orchestrator | Thursday 19 February 2026 04:17:56 +0000 (0:00:02.167) 0:01:17.430 *****
2026-02-19 04:19:31.314220 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:19:31.314229 | orchestrator |
2026-02-19 04:19:31.314239 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-02-19 04:19:31.314248 | orchestrator | Thursday 19 February 2026 04:17:58 +0000 (0:00:02.195) 0:01:19.625 *****
2026-02-19 04:19:31.314259 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:19:31.314273 | orchestrator |
2026-02-19 04:19:31.314289 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-02-19 04:19:31.314307 | orchestrator | Thursday 19 February 2026 04:18:18 +0000 (0:00:20.024) 0:01:39.650 *****
2026-02-19 04:19:31.314324 | orchestrator |
2026-02-19 04:19:31.314343 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-02-19 04:19:31.314358 | orchestrator | Thursday 19 February 2026 04:18:18 +0000 (0:00:00.068) 0:01:39.718 *****
2026-02-19 04:19:31.314384 | orchestrator |
2026-02-19 04:19:31.314402 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-02-19 04:19:31.314418 | orchestrator | Thursday 19 February 2026 04:18:18 +0000 (0:00:00.099) 0:01:39.818 *****
2026-02-19 04:19:31.314433 | orchestrator |
2026-02-19 04:19:31.314449 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-02-19 04:19:31.314465 | orchestrator | Thursday 19 February 2026 04:18:19 +0000 (0:00:00.074) 0:01:39.892 *****
2026-02-19 04:19:31.314482 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:19:31.314499 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:19:31.314515 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:19:31.314533 | orchestrator |
2026-02-19 04:19:31.314550 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-02-19 04:19:31.314568 | orchestrator | Thursday 19 February 2026 04:18:46 +0000 (0:00:27.268) 0:02:07.160 *****
2026-02-19 04:19:31.314607 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:19:31.314619 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:19:31.314630 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:19:31.314641 | orchestrator |
2026-02-19 04:19:31.314652 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-02-19 04:19:31.314663 | orchestrator | Thursday 19 February 2026 04:18:56 +0000 (0:00:10.002) 0:02:17.163 *****
2026-02-19 04:19:31.314672 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:19:31.314682 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:19:31.314691 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:19:31.314701 | orchestrator |
2026-02-19 04:19:31.314710 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-02-19 04:19:31.314719 | orchestrator | Thursday 19 February 2026 04:19:20 +0000 (0:00:24.007) 0:02:41.170 *****
2026-02-19 04:19:31.314742 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:19:31.314751 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:19:31.314760 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:19:31.314770 | orchestrator |
2026-02-19 04:19:31.314779 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-02-19 04:19:31.314789 | orchestrator | Thursday 19 February 2026 04:19:31 +0000 (0:00:10.750) 0:02:51.920 *****
2026-02-19 04:19:31.314798 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:19:31.314808 | orchestrator |
2026-02-19 04:19:31.314817 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 04:19:31.314828 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-19 04:19:31.314840 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-19 04:19:31.314849 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-19 04:19:31.314858 | orchestrator |
2026-02-19 04:19:31.314868 | orchestrator |
2026-02-19 04:19:31.314877 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 04:19:31.314887 | orchestrator | Thursday 19 February 2026 04:19:31 +0000 (0:00:00.254) 0:02:52.175 *****
2026-02-19 04:19:31.314896 | orchestrator | ===============================================================================
2026-02-19 04:19:31.314905 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.27s
2026-02-19 04:19:31.314915 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 24.01s
2026-02-19 04:19:31.314931 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.02s
2026-02-19 04:19:31.314941 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.92s
2026-02-19 04:19:31.314950 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.75s
2026-02-19 04:19:31.314960 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.00s
2026-02-19 04:19:31.314969 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.64s
2026-02-19 04:19:31.314978 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.86s
2026-02-19 04:19:31.314987 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.31s
2026-02-19 04:19:31.314997 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.26s
2026-02-19 04:19:31.315006 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.07s
2026-02-19 04:19:31.315015 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.85s
2026-02-19 04:19:31.315025 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.57s
2026-02-19 04:19:31.315034 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.41s
2026-02-19 04:19:31.315053 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.36s
2026-02-19 04:19:31.670432 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.14s
2026-02-19 04:19:31.670501 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.69s
2026-02-19 04:19:31.670506 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.20s
2026-02-19 04:19:31.670511 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.17s
2026-02-19 04:19:31.670515 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.15s
2026-02-19 04:19:34.027041 | orchestrator | 2026-02-19 04:19:34 | INFO  | Task f5231f19-7300-4394-a972-06ab72ad8122 (barbican) was prepared for execution.
2026-02-19 04:19:34.027109 | orchestrator | 2026-02-19 04:19:34 | INFO  | It takes a moment until task f5231f19-7300-4394-a972-06ab72ad8122 (barbican) has been started and output is visible here.
2026-02-19 04:20:18.207144 | orchestrator |
2026-02-19 04:20:18.207255 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-19 04:20:18.207270 | orchestrator |
2026-02-19 04:20:18.207280 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-19 04:20:18.207291 | orchestrator | Thursday 19 February 2026 04:19:38 +0000 (0:00:00.262) 0:00:00.262 *****
2026-02-19 04:20:18.207301 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:20:18.207312 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:20:18.207322 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:20:18.207332 | orchestrator |
2026-02-19 04:20:18.207341 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-19 04:20:18.207351 | orchestrator | Thursday 19 February 2026 04:19:38 +0000 (0:00:00.266) 0:00:00.529 *****
2026-02-19 04:20:18.207361 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-02-19 04:20:18.207371 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-02-19 04:20:18.207381 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-02-19 04:20:18.207390 | orchestrator |
2026-02-19 04:20:18.207400 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-02-19 04:20:18.207410 | orchestrator |
2026-02-19 04:20:18.207419 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-19 04:20:18.207429 | orchestrator | Thursday 19 February 2026 04:19:38 +0000 (0:00:00.390) 0:00:00.920 *****
2026-02-19 04:20:18.207439 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 04:20:18.207449 | orchestrator |
2026-02-19 04:20:18.207459 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-02-19 04:20:18.207468 | orchestrator | Thursday 19 February 2026 04:19:39 +0000 (0:00:00.481) 0:00:01.401 *****
2026-02-19 04:20:18.207479 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-02-19 04:20:18.207488 | orchestrator |
2026-02-19 04:20:18.207497 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-02-19 04:20:18.207507 | orchestrator | Thursday 19 February 2026 04:19:42 +0000 (0:00:03.440) 0:00:04.842 *****
2026-02-19 04:20:18.207516 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-02-19 04:20:18.207526 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-02-19 04:20:18.207535 | orchestrator |
2026-02-19 04:20:18.207545 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-02-19 04:20:18.207554 | orchestrator | Thursday 19 February 2026 04:19:49 +0000 (0:00:06.331) 0:00:11.173 *****
2026-02-19 04:20:18.207564 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-19 04:20:18.207574 | orchestrator |
2026-02-19 04:20:18.207583 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-02-19 04:20:18.207592 | orchestrator | Thursday 19 February 2026 04:19:52 +0000 (0:00:03.371) 0:00:14.545 *****
2026-02-19 04:20:18.207602 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-19 04:20:18.207611 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-02-19 04:20:18.207621 | orchestrator |
2026-02-19 04:20:18.207639 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-02-19 04:20:18.207657 | orchestrator | Thursday 19 February 2026 04:19:56 +0000 (0:00:04.106) 0:00:18.652 *****
2026-02-19 04:20:18.207689 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-19 04:20:18.207744 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-02-19 04:20:18.207765 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-02-19 04:20:18.207781 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-02-19 04:20:18.207800 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-02-19 04:20:18.207817 | orchestrator |
2026-02-19 04:20:18.207833 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-02-19 04:20:18.207876 | orchestrator | Thursday 19 February 2026 04:20:12 +0000 (0:00:16.033) 0:00:34.685 *****
2026-02-19 04:20:18.207893 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-02-19 04:20:18.207909 | orchestrator |
2026-02-19 04:20:18.207924 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-02-19 04:20:18.207941 | orchestrator | Thursday 19 February 2026 04:20:16 +0000 (0:00:04.036) 0:00:38.722 *****
2026-02-19 04:20:18.207962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-19 04:20:18.208008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-19 04:20:18.208021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-19 04:20:18.208038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-19 04:20:18.208060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-19 04:20:18.208070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-19 04:20:18.208088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:20:24.114261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:20:24.114373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:20:24.114389 | orchestrator |
2026-02-19 04:20:24.114403 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-02-19 04:20:24.114416 | orchestrator | Thursday 19 February 2026 04:20:18 +0000 (0:00:01.598) 0:00:40.320 *****
2026-02-19 04:20:24.114427 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-02-19 04:20:24.114439 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-02-19 04:20:24.114449 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-02-19 04:20:24.114460 | orchestrator |
2026-02-19 04:20:24.114487 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-02-19 04:20:24.114509 | orchestrator | Thursday 19 February 2026 04:20:19 +0000 (0:00:01.121) 0:00:41.441 *****
2026-02-19 04:20:24.114546 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:20:24.114558 | orchestrator |
2026-02-19 04:20:24.114569 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-02-19 04:20:24.114580 | orchestrator | Thursday 19 February 2026 04:20:19 +0000 (0:00:00.323) 0:00:41.764 *****
2026-02-19 04:20:24.114591 | orchestrator |
skipping: [testbed-node-0] 2026-02-19 04:20:24.114601 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:20:24.114612 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:20:24.114622 | orchestrator | 2026-02-19 04:20:24.114633 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-19 04:20:24.114657 | orchestrator | Thursday 19 February 2026 04:20:19 +0000 (0:00:00.309) 0:00:42.073 ***** 2026-02-19 04:20:24.114671 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:20:24.114684 | orchestrator | 2026-02-19 04:20:24.114696 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-19 04:20:24.114709 | orchestrator | Thursday 19 February 2026 04:20:20 +0000 (0:00:00.562) 0:00:42.636 ***** 2026-02-19 04:20:24.114723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 04:20:24.114756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 04:20:24.114771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 04:20:24.114793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:24.114813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:24.114825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:24.114839 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:24.114861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:25.458695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:25.458800 | orchestrator | 2026-02-19 04:20:25.458820 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-02-19 04:20:25.458859 | orchestrator | Thursday 19 February 2026 04:20:24 +0000 (0:00:03.585) 0:00:46.222 ***** 2026-02-19 04:20:25.458874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-19 04:20:25.458903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:20:25.458917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:20:25.458929 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:20:25.458942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-19 04:20:25.458972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:20:25.458993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:20:25.459005 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:20:25.459021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-19 04:20:25.459033 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:20:25.459044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:20:25.459055 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:20:25.459067 | orchestrator | 2026-02-19 04:20:25.459078 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-19 04:20:25.459138 | orchestrator | Thursday 19 February 2026 04:20:24 +0000 (0:00:00.578) 0:00:46.801 ***** 2026-02-19 04:20:25.459162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-19 04:20:28.979614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:20:28.979723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 
04:20:28.979740 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:20:28.979771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-19 04:20:28.979782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:20:28.979792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:20:28.979802 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:20:28.979853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-19 04:20:28.979865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:20:28.979880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:20:28.979890 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:20:28.979900 | orchestrator | 2026-02-19 04:20:28.979911 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-19 04:20:28.979923 | orchestrator | Thursday 19 February 2026 04:20:25 +0000 (0:00:00.775) 0:00:47.576 ***** 2026-02-19 04:20:28.979933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 04:20:28.979944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 04:20:28.979971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 04:20:38.132234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:38.132378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:38.132395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:38.132405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:38.132434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:38.132442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:20:38.132450 | orchestrator |
2026-02-19 04:20:38.132460 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-02-19 04:20:38.132468 | orchestrator | Thursday 19 February 2026 04:20:28 +0000 (0:00:03.516) 0:00:51.093 *****
2026-02-19 04:20:38.132476 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:20:38.132484 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:20:38.132491 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:20:38.132499 | orchestrator |
2026-02-19 04:20:38.132522 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-02-19 04:20:38.132530 | orchestrator | Thursday 19 February 2026 04:20:30 +0000 (0:00:01.476) 0:00:52.570 *****
2026-02-19 04:20:38.132538 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-19 04:20:38.132545 | orchestrator |
2026-02-19 04:20:38.132552 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-02-19 04:20:38.132559 | orchestrator | Thursday 19 February 2026 04:20:31 +0000 (0:00:00.914) 0:00:53.484 *****
2026-02-19 04:20:38.132566 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:20:38.132573 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:20:38.132580 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:20:38.132587 | orchestrator |
2026-02-19 04:20:38.132594 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-02-19 04:20:38.132601 | orchestrator | Thursday 19 February 2026 04:20:31 +0000 (0:00:00.563) 0:00:54.047 *****
2026-02-19 04:20:38.132633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 04:20:38.132643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 04:20:38.132659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 04:20:38.132673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:38.976989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:38.977148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:38.977164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:38.977188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:38.977192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:38.977196 | orchestrator | 2026-02-19 04:20:38.977202 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-19 04:20:38.977207 | orchestrator | Thursday 19 February 2026 04:20:38 +0000 (0:00:06.201) 0:01:00.249 ***** 2026-02-19 04:20:38.977223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-19 04:20:38.977231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:20:38.977235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:20:38.977239 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:20:38.977250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-19 04:20:38.977254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:20:38.977258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:20:38.977262 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:20:38.977272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-19 04:20:41.321835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:20:41.321936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:20:41.321968 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:20:41.321980 | orchestrator | 2026-02-19 04:20:41.321989 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-19 04:20:41.321998 | orchestrator | Thursday 19 February 2026 04:20:38 +0000 (0:00:00.840) 0:01:01.089 ***** 2026-02-19 04:20:41.322008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 04:20:41.322059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 04:20:41.322149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-19 04:20:41.322161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:41.322178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:41.322186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:41.322195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:41.322204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:41.322213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:20:41.322222 | orchestrator | 2026-02-19 04:20:41.322230 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-19 04:20:41.322244 | orchestrator | Thursday 19 February 2026 04:20:41 +0000 (0:00:02.342) 0:01:03.431 ***** 2026-02-19 04:21:23.537463 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:21:23.537579 | orchestrator | skipping: [testbed-node-1] 2026-02-19 
04:21:23.537616 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:21:23.537629 | orchestrator |
2026-02-19 04:21:23.537642 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-02-19 04:21:23.537655 | orchestrator | Thursday 19 February 2026 04:20:41 +0000 (0:00:00.335) 0:01:03.767 *****
2026-02-19 04:21:23.537666 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:21:23.537677 | orchestrator |
2026-02-19 04:21:23.537689 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-02-19 04:21:23.537700 | orchestrator | Thursday 19 February 2026 04:20:43 +0000 (0:00:02.158) 0:01:05.925 *****
2026-02-19 04:21:23.537711 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:21:23.537722 | orchestrator |
2026-02-19 04:21:23.537733 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-02-19 04:21:23.537744 | orchestrator | Thursday 19 February 2026 04:20:46 +0000 (0:00:02.252) 0:01:08.178 *****
2026-02-19 04:21:23.537754 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:21:23.537765 | orchestrator |
2026-02-19 04:21:23.537776 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-02-19 04:21:23.537787 | orchestrator | Thursday 19 February 2026 04:20:58 +0000 (0:00:12.657) 0:01:20.835 *****
2026-02-19 04:21:23.537798 | orchestrator |
2026-02-19 04:21:23.537809 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-02-19 04:21:23.537820 | orchestrator | Thursday 19 February 2026 04:20:58 +0000 (0:00:00.067) 0:01:20.902 *****
2026-02-19 04:21:23.537831 | orchestrator |
2026-02-19 04:21:23.537842 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-02-19 04:21:23.537853 | orchestrator | Thursday 19 February 2026 04:20:58 +0000 (0:00:00.068) 0:01:20.971 *****
2026-02-19 04:21:23.537864 | orchestrator |
2026-02-19 04:21:23.537875 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-02-19 04:21:23.537886 | orchestrator | Thursday 19 February 2026 04:20:58 +0000 (0:00:00.070) 0:01:21.042 *****
2026-02-19 04:21:23.537896 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:21:23.537907 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:21:23.537918 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:21:23.537929 | orchestrator |
2026-02-19 04:21:23.537940 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-02-19 04:21:23.537951 | orchestrator | Thursday 19 February 2026 04:21:10 +0000 (0:00:11.279) 0:01:32.322 *****
2026-02-19 04:21:23.537962 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:21:23.537973 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:21:23.537984 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:21:23.537995 | orchestrator |
2026-02-19 04:21:23.538006 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-02-19 04:21:23.538122 | orchestrator | Thursday 19 February 2026 04:21:15 +0000 (0:00:04.807) 0:01:37.130 *****
2026-02-19 04:21:23.538145 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:21:23.538163 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:21:23.538175 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:21:23.538185 | orchestrator |
2026-02-19 04:21:23.538196 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 04:21:23.538208 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-19 04:21:23.538221 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 04:21:23.538231 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 04:21:23.538242 | orchestrator |
2026-02-19 04:21:23.538253 | orchestrator |
2026-02-19 04:21:23.538263 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 04:21:23.538274 | orchestrator | Thursday 19 February 2026 04:21:23 +0000 (0:00:08.195) 0:01:45.326 *****
2026-02-19 04:21:23.538295 | orchestrator | ===============================================================================
2026-02-19 04:21:23.538305 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.03s
2026-02-19 04:21:23.538316 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.66s
2026-02-19 04:21:23.538327 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.28s
2026-02-19 04:21:23.538337 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.20s
2026-02-19 04:21:23.538348 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.33s
2026-02-19 04:21:23.538358 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.20s
2026-02-19 04:21:23.538369 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.81s
2026-02-19 04:21:23.538379 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.11s
2026-02-19 04:21:23.538390 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.04s
2026-02-19 04:21:23.538400 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.59s
2026-02-19 04:21:23.538411 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.52s
2026-02-19 04:21:23.538421 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.44s
2026-02-19 04:21:23.538432 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.37s
2026-02-19 04:21:23.538443 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.34s
2026-02-19 04:21:23.538454 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.25s
2026-02-19 04:21:23.538483 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.16s
2026-02-19 04:21:23.538502 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.60s
2026-02-19 04:21:23.538513 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.48s
2026-02-19 04:21:23.538524 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.12s
2026-02-19 04:21:23.538535 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.91s
2026-02-19 04:21:25.816185 | orchestrator | 2026-02-19 04:21:25 | INFO  | Task e31ba12a-7870-4e27-b99c-faf85f31eaf1 (designate) was prepared for execution.
2026-02-19 04:21:25.816310 | orchestrator | 2026-02-19 04:21:25 | INFO  | It takes a moment until task e31ba12a-7870-4e27-b99c-faf85f31eaf1 (designate) has been started and output is visible here.
2026-02-19 04:21:57.893387 | orchestrator |
2026-02-19 04:21:57.893506 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-19 04:21:57.893524 | orchestrator |
2026-02-19 04:21:57.893537 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-19 04:21:57.893548 | orchestrator | Thursday 19 February 2026 04:21:29 +0000 (0:00:00.261) 0:00:00.261 *****
2026-02-19 04:21:57.893560 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:21:57.893572 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:21:57.893583 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:21:57.893594 | orchestrator |
2026-02-19 04:21:57.893605 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-19 04:21:57.893616 | orchestrator | Thursday 19 February 2026 04:21:30 +0000 (0:00:00.309) 0:00:00.571 *****
2026-02-19 04:21:57.893628 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-02-19 04:21:57.893639 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-02-19 04:21:57.893650 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-02-19 04:21:57.893661 | orchestrator |
2026-02-19 04:21:57.893672 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-02-19 04:21:57.893683 | orchestrator |
2026-02-19 04:21:57.893694 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-19 04:21:57.893705 | orchestrator | Thursday 19 February 2026 04:21:30 +0000 (0:00:00.434) 0:00:01.006 *****
2026-02-19 04:21:57.893740 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 04:21:57.893752 | orchestrator |
2026-02-19 04:21:57.893763 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-02-19 04:21:57.893773 | orchestrator | Thursday 19 February 2026 04:21:31 +0000 (0:00:00.584) 0:00:01.590 *****
2026-02-19 04:21:57.893784 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-02-19 04:21:57.893794 | orchestrator |
2026-02-19 04:21:57.893805 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-02-19 04:21:57.893816 | orchestrator | Thursday 19 February 2026 04:21:34 +0000 (0:00:03.490) 0:00:05.080 *****
2026-02-19 04:21:57.893826 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-02-19 04:21:57.893837 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-02-19 04:21:57.893848 | orchestrator |
2026-02-19 04:21:57.893858 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-02-19 04:21:57.893869 | orchestrator | Thursday 19 February 2026 04:21:41 +0000 (0:00:06.558) 0:00:11.639 *****
2026-02-19 04:21:57.893880 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-19 04:21:57.893892 | orchestrator |
2026-02-19 04:21:57.893903 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-02-19 04:21:57.893916 | orchestrator | Thursday 19 February 2026 04:21:44 +0000 (0:00:03.295) 0:00:14.934 *****
2026-02-19 04:21:57.893929 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-19 04:21:57.893941 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-02-19 04:21:57.893953 | orchestrator |
2026-02-19 04:21:57.893966 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-02-19 04:21:57.893978 | orchestrator | Thursday 19 February 2026 04:21:48 +0000 (0:00:04.087) 0:00:19.022 *****
2026-02-19 04:21:57.893990 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-19 04:21:57.894002 | orchestrator |
2026-02-19 04:21:57.894129 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-02-19 04:21:57.894149 | orchestrator | Thursday 19 February 2026 04:21:52 +0000 (0:00:03.296) 0:00:22.319 *****
2026-02-19 04:21:57.894162 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-02-19 04:21:57.894174 | orchestrator |
2026-02-19 04:21:57.894186 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-02-19 04:21:57.894198 | orchestrator | Thursday 19 February 2026 04:21:55 +0000 (0:00:03.815) 0:00:26.134 *****
2026-02-19 04:21:57.894231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:21:57.894271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-19 04:21:57.894296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-19 04:21:57.894309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-19 04:21:57.894321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-19 04:21:57.894332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-19 04:21:57.894350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-19 04:21:57.894378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:04.112913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:04.113079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:04.113105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:04.113120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:04.113151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:04.113166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:04.113228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:04.113247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-19 
04:22:04.113263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:04.113279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:04.113296 | orchestrator | 2026-02-19 04:22:04.113314 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-19 04:22:04.113326 | orchestrator | Thursday 19 February 2026 04:21:58 +0000 (0:00:02.793) 0:00:28.928 ***** 2026-02-19 04:22:04.113335 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:22:04.113345 | orchestrator | 2026-02-19 04:22:04.113353 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-19 04:22:04.113362 | orchestrator | Thursday 19 February 2026 04:21:58 +0000 (0:00:00.133) 0:00:29.061 ***** 2026-02-19 04:22:04.113375 | orchestrator | skipping: [testbed-node-0] 2026-02-19 
04:22:04.113391 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:22:04.113405 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:22:04.113419 | orchestrator | 2026-02-19 04:22:04.113433 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-19 04:22:04.113447 | orchestrator | Thursday 19 February 2026 04:21:59 +0000 (0:00:00.522) 0:00:29.583 ***** 2026-02-19 04:22:04.113462 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:22:04.113490 | orchestrator | 2026-02-19 04:22:04.113506 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-19 04:22:04.113522 | orchestrator | Thursday 19 February 2026 04:21:59 +0000 (0:00:00.533) 0:00:30.117 ***** 2026-02-19 04:22:04.113578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-19 04:22:04.113611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-19 04:22:05.974686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-19 04:22:05.974789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-19 04:22:05.974806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-19 04:22:05.974866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-19 04:22:05.974892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:05.974935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:05.974960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:05.974980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:05.974994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:05.975015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:05.975098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:05.975114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:05.975139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:06.709321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:06.709422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:06.709439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:06.709477 | orchestrator | 2026-02-19 04:22:06.709491 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-19 04:22:06.709504 | orchestrator | Thursday 19 February 2026 04:22:05 +0000 (0:00:06.131) 0:00:36.248 ***** 2026-02-19 04:22:06.709532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:06.709546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:06.709576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:06.709589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:06.709601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:06.709622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:06.709634 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:22:06.709653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:06.709664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:06.709676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:06.709695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.361005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.361189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.361207 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:22:07.361248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:07.361260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:07.361271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.361282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.361309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.361334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.361345 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:22:07.361355 | orchestrator |
2026-02-19 04:22:07.361367 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-02-19 04:22:07.361378 | orchestrator | Thursday 19 February 2026 04:22:06 +0000 (0:00:00.835) 0:00:37.084 *****
2026-02-19 04:22:07.361394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:07.361404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:07.361415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.361431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.657576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.657679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.657695 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:22:07.657728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:07.657741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:07.657753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.657765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.657814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.657827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.657838 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:22:07.657855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:07.657867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:07.657878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.657889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:07.657915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:12.166401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:12.166524 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:22:12.166545 | orchestrator |
2026-02-19 04:22:12.166561 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-02-19 04:22:12.166577 | orchestrator | Thursday 19 February 2026 04:22:07 +0000 (0:00:00.852) 0:00:37.936 *****
2026-02-19 04:22:12.166611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:12.166628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:12.166642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:12.166700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:12.166718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:12.166740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:12.166755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:12.166770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:12.166785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:12.166811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:12.166837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:23.293730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:23.293865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:23.293884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:23.293896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:23.293929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:23.293943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:23.293982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:23.294000 | orchestrator |
2026-02-19 04:22:23.294144 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-02-19 04:22:23.294161 | orchestrator | Thursday 19 February 2026 04:22:14 +0000 (0:00:06.358) 0:00:44.295 *****
2026-02-19 04:22:23.294180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:23.294193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:23.294214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:23.294225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-19 04:22:23.294247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:30.930787 | orchestrator | 2026-02-19 04:22:30.930801 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-19 04:22:30.930813 | orchestrator | Thursday 19 February 2026 04:22:27 +0000 (0:00:13.398) 0:00:57.693 ***** 2026-02-19 04:22:30.930831 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-19 04:22:34.839311 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-19 04:22:34.839394 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-19 04:22:34.839405 | orchestrator | 2026-02-19 04:22:34.839414 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-19 04:22:34.839422 | orchestrator | Thursday 19 February 2026 04:22:30 +0000 (0:00:03.516) 0:01:01.209 ***** 2026-02-19 04:22:34.839430 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-19 04:22:34.839437 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-19 04:22:34.839445 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-19 04:22:34.839452 | orchestrator | 2026-02-19 04:22:34.839472 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-19 04:22:34.839480 | orchestrator | Thursday 19 February 2026 04:22:33 +0000 (0:00:02.282) 0:01:03.492 ***** 2026-02-19 04:22:34.839507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-19 04:22:34.839519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-19 04:22:34.839527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-19 04:22:34.839548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-19 04:22:34.839558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-19 04:22:34.839575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-19 04:22:34.839585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-19 04:22:34.839593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-19 04:22:34.839601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-19 04:22:34.839609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-19 04:22:34.839623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-19 04:22:37.408563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2026-02-19 04:22:37.408686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-19 04:22:37.408701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-19 04:22:37.408711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-19 04:22:37.408721 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:37.408732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:37.408760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:22:37.408778 | orchestrator | 2026-02-19 04:22:37.408791 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-02-19 04:22:37.408802 | orchestrator | Thursday 19 February 2026 04:22:35 +0000 (0:00:02.645) 0:01:06.138 ***** 2026-02-19 04:22:37.408819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-19 04:22:37.408832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-19 
04:22:37.408850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:37.408867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:37.408901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:38.386385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:38.386492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:38.386510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:38.386524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:38.386537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:38.386548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:38.386605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:38.386619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:38.386631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:38.386642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:38.386654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:38.386669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:38.386703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:38.386732 | orchestrator |
2026-02-19 04:22:38.386753 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-19 04:22:38.386785 | orchestrator | Thursday 19 February 2026 04:22:38 +0000 (0:00:02.519) 0:01:08.658 *****
2026-02-19 04:22:39.362308 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:22:39.362408 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:22:39.362422 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:22:39.362434 | orchestrator |
2026-02-19 04:22:39.362464 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-02-19 04:22:39.362476 | orchestrator | Thursday 19 February 2026 04:22:38 +0000 (0:00:00.319) 0:01:08.977 *****
2026-02-19 04:22:39.362491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:39.362507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:39.362520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:39.362533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:39.362568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:39.362603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:39.362616 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:22:39.362628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:39.362639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:39.362651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:39.362662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:39.362680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:39.362698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:42.662948 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:22:42.663158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:42.663190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:42.663209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:42.663226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:42.663265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:42.663276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:42.663285 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:22:42.663295 | orchestrator |
2026-02-19 04:22:42.663322 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-02-19 04:22:42.663338 | orchestrator | Thursday 19 February 2026 04:22:39 +0000 (0:00:00.775) 0:01:09.753 *****
2026-02-19 04:22:42.663348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:42.663359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:42.663368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-19 04:22:42.663384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:42.663398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:44.447833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 04:22:44.447907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:44.447916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:44.447922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 04:22:44.447942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:44.447949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:44.447970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 04:22:44.447976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:44.447982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:44.447987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 04:22:44.447996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:44.448002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:44.448007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 04:22:44.448013 | orchestrator |
2026-02-19 04:22:44.448062 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-19 04:22:44.448070 | orchestrator | Thursday 19 February 2026 04:22:44 +0000 (0:00:04.653) 0:01:14.406 *****
2026-02-19 04:22:44.448075 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:22:44.448085 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:24:01.914293 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:24:01.914410 | orchestrator |
2026-02-19 04:24:01.914443 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-02-19 04:24:01.914457 | orchestrator | Thursday 19 February 2026 04:22:44 +0000 (0:00:00.320) 0:01:14.727 *****
2026-02-19 04:24:01.914468 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-02-19 04:24:01.914479 | orchestrator |
2026-02-19 04:24:01.914490 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-02-19 04:24:01.914500 | orchestrator | Thursday 19 February 2026 04:22:46 +0000 (0:00:02.154) 0:01:16.881 *****
2026-02-19 04:24:01.914511 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-19 04:24:01.914523 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-02-19 04:24:01.914534 | orchestrator |
2026-02-19 04:24:01.914545 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-02-19 04:24:01.914555 | orchestrator | Thursday 19 February 2026 04:22:48 +0000 (0:00:02.207) 0:01:19.089 *****
2026-02-19 04:24:01.914566 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:24:01.914576 | orchestrator |
2026-02-19 04:24:01.914587 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-19 04:24:01.914598 | orchestrator | Thursday 19 February 2026 04:23:04 +0000 (0:00:16.095) 0:01:35.184 *****
2026-02-19 04:24:01.914608 | orchestrator |
2026-02-19 04:24:01.914619 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-19 04:24:01.914652 | orchestrator | Thursday 19 February 2026 04:23:04 +0000 (0:00:00.068) 0:01:35.253 *****
2026-02-19 04:24:01.914664 | orchestrator |
2026-02-19 04:24:01.914674 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-19 04:24:01.914685 | orchestrator | Thursday 19 February 2026 04:23:05 +0000 (0:00:00.067) 0:01:35.321 *****
2026-02-19 04:24:01.914695 | orchestrator |
2026-02-19 04:24:01.914706 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-02-19 04:24:01.914717 | orchestrator | Thursday 19 February 2026 04:23:05 +0000 (0:00:00.070) 0:01:35.391 *****
2026-02-19 04:24:01.914728 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:24:01.914739 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:24:01.914749 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:24:01.914760 | orchestrator |
2026-02-19 04:24:01.914771 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-02-19 04:24:01.914781 | orchestrator | Thursday 19 February 2026 04:23:17 +0000 (0:00:12.657) 0:01:48.048 *****
2026-02-19 04:24:01.914792 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:24:01.914803 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:24:01.914813 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:24:01.914824 | orchestrator |
2026-02-19 04:24:01.914837 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-02-19 04:24:01.914850 | orchestrator | Thursday 19 February 2026 04:23:23 +0000 (0:00:05.535) 0:01:53.583 *****
2026-02-19 04:24:01.914863 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:24:01.914875 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:24:01.914888 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:24:01.914902 | orchestrator |
2026-02-19 04:24:01.914914 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-02-19 04:24:01.914928 | orchestrator | Thursday 19 February 2026 04:23:33 +0000 (0:00:10.551) 0:02:04.135 *****
2026-02-19 04:24:01.914940 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:24:01.914953 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:24:01.914979 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:24:01.915013 | orchestrator |
2026-02-19 04:24:01.915027
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-19 04:24:01.915040 | orchestrator | Thursday 19 February 2026 04:23:39 +0000 (0:00:05.760) 0:02:09.895 ***** 2026-02-19 04:24:01.915052 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:24:01.915065 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:24:01.915078 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:24:01.915090 | orchestrator | 2026-02-19 04:24:01.915103 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-19 04:24:01.915116 | orchestrator | Thursday 19 February 2026 04:23:48 +0000 (0:00:08.892) 0:02:18.787 ***** 2026-02-19 04:24:01.915129 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:24:01.915142 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:24:01.915155 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:24:01.915166 | orchestrator | 2026-02-19 04:24:01.915177 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-19 04:24:01.915187 | orchestrator | Thursday 19 February 2026 04:23:54 +0000 (0:00:06.026) 0:02:24.814 ***** 2026-02-19 04:24:01.915198 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:24:01.915208 | orchestrator | 2026-02-19 04:24:01.915219 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:24:01.915232 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 04:24:01.915244 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-19 04:24:01.915255 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-19 04:24:01.915273 | orchestrator | 2026-02-19 04:24:01.915284 | orchestrator | 2026-02-19 04:24:01.915295 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-19 04:24:01.915305 | orchestrator | Thursday 19 February 2026 04:24:01 +0000 (0:00:07.112) 0:02:31.927 ***** 2026-02-19 04:24:01.915316 | orchestrator | =============================================================================== 2026-02-19 04:24:01.915326 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.10s 2026-02-19 04:24:01.915337 | orchestrator | designate : Copying over designate.conf -------------------------------- 13.40s 2026-02-19 04:24:01.915365 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.66s 2026-02-19 04:24:01.915382 | orchestrator | designate : Restart designate-central container ------------------------ 10.55s 2026-02-19 04:24:01.915393 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.89s 2026-02-19 04:24:01.915404 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.11s 2026-02-19 04:24:01.915414 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.56s 2026-02-19 04:24:01.915425 | orchestrator | designate : Copying over config.json files for services ----------------- 6.36s 2026-02-19 04:24:01.915436 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.13s 2026-02-19 04:24:01.915446 | orchestrator | designate : Restart designate-worker container -------------------------- 6.03s 2026-02-19 04:24:01.915457 | orchestrator | designate : Restart designate-producer container ------------------------ 5.76s 2026-02-19 04:24:01.915467 | orchestrator | designate : Restart designate-api container ----------------------------- 5.54s 2026-02-19 04:24:01.915478 | orchestrator | designate : Check designate containers ---------------------------------- 4.65s 2026-02-19 04:24:01.915488 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.09s 2026-02-19 04:24:01.915499 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.82s 2026-02-19 04:24:01.915510 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.52s 2026-02-19 04:24:01.915520 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.49s 2026-02-19 04:24:01.915530 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.30s 2026-02-19 04:24:01.915541 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.30s 2026-02-19 04:24:01.915551 | orchestrator | designate : Ensuring config directories exist --------------------------- 2.79s 2026-02-19 04:24:04.068389 | orchestrator | 2026-02-19 04:24:04 | INFO  | Task 1d52dd80-6100-4f23-9fb7-dfcf0bdbb8fd (octavia) was prepared for execution. 2026-02-19 04:24:04.068489 | orchestrator | 2026-02-19 04:24:04 | INFO  | It takes a moment until task 1d52dd80-6100-4f23-9fb7-dfcf0bdbb8fd (octavia) has been started and output is visible here. 
2026-02-19 04:26:09.860910 | orchestrator | 2026-02-19 04:26:09.861150 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:26:09.861187 | orchestrator | 2026-02-19 04:26:09.861208 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 04:26:09.861231 | orchestrator | Thursday 19 February 2026 04:24:08 +0000 (0:00:00.276) 0:00:00.276 ***** 2026-02-19 04:26:09.861253 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:26:09.861275 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:26:09.861295 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:26:09.861315 | orchestrator | 2026-02-19 04:26:09.861336 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:26:09.861356 | orchestrator | Thursday 19 February 2026 04:24:08 +0000 (0:00:00.311) 0:00:00.588 ***** 2026-02-19 04:26:09.861377 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-19 04:26:09.861399 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-19 04:26:09.861420 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-19 04:26:09.861441 | orchestrator | 2026-02-19 04:26:09.861462 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-19 04:26:09.861516 | orchestrator | 2026-02-19 04:26:09.861540 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-19 04:26:09.861562 | orchestrator | Thursday 19 February 2026 04:24:09 +0000 (0:00:00.446) 0:00:01.034 ***** 2026-02-19 04:26:09.861584 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:26:09.861606 | orchestrator | 2026-02-19 04:26:09.861621 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-02-19 04:26:09.861634 | orchestrator | Thursday 19 February 2026 04:24:09 +0000 (0:00:00.566) 0:00:01.601 ***** 2026-02-19 04:26:09.861647 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-19 04:26:09.861660 | orchestrator | 2026-02-19 04:26:09.861679 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-02-19 04:26:09.861698 | orchestrator | Thursday 19 February 2026 04:24:13 +0000 (0:00:03.491) 0:00:05.092 ***** 2026-02-19 04:26:09.861717 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-19 04:26:09.861738 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-19 04:26:09.861757 | orchestrator | 2026-02-19 04:26:09.861776 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-19 04:26:09.861796 | orchestrator | Thursday 19 February 2026 04:24:19 +0000 (0:00:06.612) 0:00:11.705 ***** 2026-02-19 04:26:09.861814 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-19 04:26:09.861832 | orchestrator | 2026-02-19 04:26:09.861851 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-19 04:26:09.861870 | orchestrator | Thursday 19 February 2026 04:24:22 +0000 (0:00:03.124) 0:00:14.830 ***** 2026-02-19 04:26:09.861888 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-19 04:26:09.861907 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-19 04:26:09.861925 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-19 04:26:09.861944 | orchestrator | 2026-02-19 04:26:09.861962 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-19 04:26:09.862011 | orchestrator | Thursday 19 February 2026 04:24:31 +0000 
(0:00:08.446) 0:00:23.276 ***** 2026-02-19 04:26:09.862132 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-19 04:26:09.862153 | orchestrator | 2026-02-19 04:26:09.862171 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-02-19 04:26:09.862186 | orchestrator | Thursday 19 February 2026 04:24:34 +0000 (0:00:03.382) 0:00:26.658 ***** 2026-02-19 04:26:09.862197 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-19 04:26:09.862208 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-19 04:26:09.862219 | orchestrator | 2026-02-19 04:26:09.862229 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-19 04:26:09.862240 | orchestrator | Thursday 19 February 2026 04:24:42 +0000 (0:00:07.492) 0:00:34.151 ***** 2026-02-19 04:26:09.862252 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-19 04:26:09.862271 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-19 04:26:09.862289 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-19 04:26:09.862307 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-19 04:26:09.862325 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-19 04:26:09.862344 | orchestrator | 2026-02-19 04:26:09.862362 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-19 04:26:09.862380 | orchestrator | Thursday 19 February 2026 04:24:58 +0000 (0:00:15.992) 0:00:50.143 ***** 2026-02-19 04:26:09.862398 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:26:09.862431 | orchestrator | 2026-02-19 04:26:09.862452 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-02-19 04:26:09.862471 | orchestrator | Thursday 19 February 2026 04:24:58 +0000 (0:00:00.732) 0:00:50.876 ***** 2026-02-19 04:26:09.862489 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:26:09.862509 | orchestrator | 2026-02-19 04:26:09.862527 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-19 04:26:09.862546 | orchestrator | Thursday 19 February 2026 04:25:03 +0000 (0:00:04.834) 0:00:55.710 ***** 2026-02-19 04:26:09.862566 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:26:09.862584 | orchestrator | 2026-02-19 04:26:09.862602 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-19 04:26:09.862646 | orchestrator | Thursday 19 February 2026 04:25:07 +0000 (0:00:03.789) 0:00:59.499 ***** 2026-02-19 04:26:09.862667 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:26:09.862685 | orchestrator | 2026-02-19 04:26:09.862702 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-19 04:26:09.862713 | orchestrator | Thursday 19 February 2026 04:25:10 +0000 (0:00:03.026) 0:01:02.526 ***** 2026-02-19 04:26:09.862724 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-19 04:26:09.862735 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-19 04:26:09.862745 | orchestrator | 2026-02-19 04:26:09.862756 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-19 04:26:09.862767 | orchestrator | Thursday 19 February 2026 04:25:21 +0000 (0:00:10.597) 0:01:13.124 ***** 2026-02-19 04:26:09.862785 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-19 04:26:09.862811 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-19 04:26:09.862833 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-19 04:26:09.862853 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-19 04:26:09.862871 | orchestrator | 2026-02-19 04:26:09.862888 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-19 04:26:09.862910 | orchestrator | Thursday 19 February 2026 04:25:36 +0000 (0:00:15.743) 0:01:28.867 ***** 2026-02-19 04:26:09.862929 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:26:09.862946 | orchestrator | 2026-02-19 04:26:09.862964 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-19 04:26:09.863011 | orchestrator | Thursday 19 February 2026 04:25:41 +0000 (0:00:04.562) 0:01:33.429 ***** 2026-02-19 04:26:09.863027 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:26:09.863045 | orchestrator | 2026-02-19 04:26:09.863063 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-19 04:26:09.863083 | orchestrator | Thursday 19 February 2026 04:25:46 +0000 (0:00:05.284) 0:01:38.714 ***** 2026-02-19 04:26:09.863101 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:26:09.863120 | orchestrator | 2026-02-19 04:26:09.863131 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-19 04:26:09.863142 | orchestrator | Thursday 19 February 2026 04:25:46 +0000 (0:00:00.202) 0:01:38.917 ***** 2026-02-19 04:26:09.863153 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:26:09.863164 | orchestrator | 2026-02-19 04:26:09.863175 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-19 04:26:09.863186 | orchestrator | Thursday 19 February 2026 04:25:51 +0000 (0:00:04.819) 0:01:43.736 ***** 2026-02-19 04:26:09.863197 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:26:09.863208 | orchestrator | 2026-02-19 04:26:09.863219 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-19 04:26:09.863243 | orchestrator | Thursday 19 February 2026 04:25:52 +0000 (0:00:01.093) 0:01:44.830 ***** 2026-02-19 04:26:09.863257 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:26:09.863276 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:26:09.863313 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:26:09.863334 | orchestrator | 2026-02-19 04:26:09.863353 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-19 04:26:09.863371 | orchestrator | Thursday 19 February 2026 04:25:57 +0000 (0:00:05.153) 0:01:49.983 ***** 2026-02-19 04:26:09.863391 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:26:09.863409 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:26:09.863428 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:26:09.863443 | orchestrator | 2026-02-19 04:26:09.863454 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-19 04:26:09.863465 | orchestrator | Thursday 19 February 2026 04:26:02 +0000 (0:00:04.316) 0:01:54.300 ***** 2026-02-19 04:26:09.863476 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:26:09.863486 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:26:09.863497 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:26:09.863508 | orchestrator | 2026-02-19 04:26:09.863518 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-19 
04:26:09.863529 | orchestrator | Thursday 19 February 2026 04:26:03 +0000 (0:00:01.027) 0:01:55.327 ***** 2026-02-19 04:26:09.863540 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:26:09.863551 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:26:09.863561 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:26:09.863572 | orchestrator | 2026-02-19 04:26:09.863582 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-19 04:26:09.863593 | orchestrator | Thursday 19 February 2026 04:26:05 +0000 (0:00:01.929) 0:01:57.257 ***** 2026-02-19 04:26:09.863603 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:26:09.863614 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:26:09.863625 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:26:09.863635 | orchestrator | 2026-02-19 04:26:09.863646 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-19 04:26:09.863657 | orchestrator | Thursday 19 February 2026 04:26:06 +0000 (0:00:01.196) 0:01:58.454 ***** 2026-02-19 04:26:09.863667 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:26:09.863678 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:26:09.863688 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:26:09.863699 | orchestrator | 2026-02-19 04:26:09.863709 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-19 04:26:09.863720 | orchestrator | Thursday 19 February 2026 04:26:07 +0000 (0:00:01.159) 0:01:59.613 ***** 2026-02-19 04:26:09.863730 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:26:09.863741 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:26:09.863751 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:26:09.863762 | orchestrator | 2026-02-19 04:26:09.863785 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-19 04:26:36.957477 | orchestrator 
| Thursday 19 February 2026 04:26:09 +0000 (0:00:02.240) 0:02:01.853 ***** 2026-02-19 04:26:36.957558 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:26:36.957565 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:26:36.957570 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:26:36.957574 | orchestrator | 2026-02-19 04:26:36.957579 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-19 04:26:36.957583 | orchestrator | Thursday 19 February 2026 04:26:11 +0000 (0:00:01.493) 0:02:03.346 ***** 2026-02-19 04:26:36.957587 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:26:36.957592 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:26:36.957596 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:26:36.957600 | orchestrator | 2026-02-19 04:26:36.957604 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-19 04:26:36.957608 | orchestrator | Thursday 19 February 2026 04:26:12 +0000 (0:00:00.669) 0:02:04.016 ***** 2026-02-19 04:26:36.957627 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:26:36.957631 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:26:36.957635 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:26:36.957638 | orchestrator | 2026-02-19 04:26:36.957642 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-19 04:26:36.957646 | orchestrator | Thursday 19 February 2026 04:26:15 +0000 (0:00:03.035) 0:02:07.052 ***** 2026-02-19 04:26:36.957651 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:26:36.957655 | orchestrator | 2026-02-19 04:26:36.957659 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-19 04:26:36.957662 | orchestrator | Thursday 19 February 2026 04:26:15 +0000 (0:00:00.516) 0:02:07.568 ***** 2026-02-19 
04:26:36.957666 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:26:36.957670 | orchestrator | 2026-02-19 04:26:36.957674 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-19 04:26:36.957678 | orchestrator | Thursday 19 February 2026 04:26:19 +0000 (0:00:04.244) 0:02:11.813 ***** 2026-02-19 04:26:36.957682 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:26:36.957685 | orchestrator | 2026-02-19 04:26:36.957689 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-19 04:26:36.957693 | orchestrator | Thursday 19 February 2026 04:26:23 +0000 (0:00:03.320) 0:02:15.134 ***** 2026-02-19 04:26:36.957697 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-19 04:26:36.957701 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-19 04:26:36.957705 | orchestrator | 2026-02-19 04:26:36.957709 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-19 04:26:36.957713 | orchestrator | Thursday 19 February 2026 04:26:30 +0000 (0:00:07.427) 0:02:22.561 ***** 2026-02-19 04:26:36.957716 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:26:36.957720 | orchestrator | 2026-02-19 04:26:36.957724 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-19 04:26:36.957728 | orchestrator | Thursday 19 February 2026 04:26:34 +0000 (0:00:03.955) 0:02:26.516 ***** 2026-02-19 04:26:36.957731 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:26:36.957735 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:26:36.957739 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:26:36.957743 | orchestrator | 2026-02-19 04:26:36.957747 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-19 04:26:36.957751 | orchestrator | Thursday 19 February 2026 04:26:35 +0000 (0:00:00.500) 0:02:27.017 ***** 
2026-02-19 04:26:36.957768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:26:36.957784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:26:36.957793 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:26:36.957798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:26:36.957804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:26:36.957810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:26:36.957815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:36.957821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:36.957832 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:38.407584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:38.407691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:38.407707 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:38.407751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:26:38.407774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:26:38.407820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:26:38.407837 | orchestrator | 2026-02-19 04:26:38.407858 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-19 04:26:38.407877 | orchestrator | Thursday 19 February 2026 04:26:37 +0000 (0:00:02.413) 0:02:29.431 ***** 2026-02-19 04:26:38.407896 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:26:38.407918 | orchestrator | 2026-02-19 04:26:38.407936 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-19 04:26:38.407955 | orchestrator | Thursday 19 February 2026 04:26:37 +0000 (0:00:00.128) 0:02:29.560 ***** 2026-02-19 04:26:38.408003 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:26:38.408044 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:26:38.408065 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:26:38.408083 | orchestrator | 2026-02-19 04:26:38.408102 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-19 04:26:38.408121 | orchestrator | Thursday 19 February 2026 04:26:37 +0000 (0:00:00.298) 0:02:29.859 ***** 2026-02-19 04:26:38.408142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 04:26:38.408158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 04:26:38.408180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 04:26:38.408195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 04:26:38.408219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:26:38.408230 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:26:38.408253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 04:26:43.085122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 04:26:43.085265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 04:26:43.085317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 04:26:43.085370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:26:43.085389 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:26:43.085409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 04:26:43.085430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 04:26:43.085475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 04:26:43.085498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 04:26:43.085529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:26:43.085562 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:26:43.085576 | orchestrator | 2026-02-19 04:26:43.085591 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-19 04:26:43.085605 | orchestrator | Thursday 19 February 2026 04:26:38 +0000 (0:00:00.654) 0:02:30.513 ***** 2026-02-19 04:26:43.085619 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:26:43.085631 | orchestrator | 2026-02-19 04:26:43.085644 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-19 04:26:43.085656 | orchestrator | Thursday 19 February 2026 04:26:39 +0000 (0:00:00.699) 0:02:31.213 ***** 2026-02-19 04:26:43.085670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:26:43.085686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:26:43.085709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:26:44.624096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:26:44.624255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:26:44.624274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:26:44.624287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:44.624300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:44.624311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:44.624341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:44.624358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:44.624377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:44.624389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:26:44.624401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:26:44.624412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:26:44.624424 | orchestrator | 2026-02-19 04:26:44.624437 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-19 04:26:44.624450 | orchestrator | Thursday 19 February 2026 04:26:44 +0000 (0:00:04.870) 0:02:36.083 ***** 2026-02-19 04:26:44.624470 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 04:26:44.719584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 04:26:44.719730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 04:26:44.719767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 04:26:44.719796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:26:44.719816 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:26:44.719837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 04:26:44.719857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 04:26:44.719995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 04:26:44.720024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 04:26:44.720036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:26:44.720047 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:26:44.720059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 04:26:44.720071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 04:26:44.720082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 04:26:44.720114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-02-19 04:26:45.522424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:26:45.522527 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:26:45.522543 | orchestrator | 2026-02-19 04:26:45.522556 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-19 04:26:45.522568 | orchestrator | Thursday 19 February 2026 04:26:44 +0000 (0:00:00.640) 0:02:36.723 ***** 2026-02-19 04:26:45.522581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-02-19 04:26:45.522595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 04:26:45.522607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 04:26:45.522640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 04:26:45.522671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:26:45.522683 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:26:45.522701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 04:26:45.522713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 04:26:45.522724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 04:26:45.522735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 04:26:45.522754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:26:45.522765 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:26:45.522789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 04:26:50.056523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 04:26:50.056605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 04:26:50.056615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 04:26:50.056621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 04:26:50.056642 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:26:50.056649 | orchestrator | 2026-02-19 04:26:50.056655 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-19 
04:26:50.056661 | orchestrator | Thursday 19 February 2026 04:26:46 +0000 (0:00:01.299) 0:02:38.023 ***** 2026-02-19 04:26:50.056667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:26:50.056694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:26:50.056700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:26:50.056705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:26:50.056714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:26:50.056719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:26:50.056724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:26:50.056795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:05.640394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:05.640508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:05.640524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:05.640560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:05.640573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:27:05.640585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-19 04:27:05.640630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:27:05.640643 | orchestrator | 2026-02-19 04:27:05.640686 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-19 04:27:05.640700 | orchestrator | Thursday 19 February 2026 04:26:50 +0000 (0:00:04.966) 0:02:42.990 ***** 2026-02-19 04:27:05.640711 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-19 04:27:05.640723 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-19 04:27:05.640733 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-19 04:27:05.640744 | orchestrator | 2026-02-19 04:27:05.640755 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-19 04:27:05.640766 | orchestrator | Thursday 19 February 2026 04:26:52 +0000 (0:00:01.550) 0:02:44.540 ***** 2026-02-19 04:27:05.640778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:27:05.640799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:27:05.640810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:27:05.640834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:27:20.730068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:27:20.730193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:27:20.730237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:20.730267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:20.730290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:20.730302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:20.730346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:20.730359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:20.730379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:27:20.730392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:27:20.730403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:27:20.730415 | orchestrator | 2026-02-19 04:27:20.730428 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-19 04:27:20.730441 | orchestrator | Thursday 19 February 2026 04:27:08 +0000 (0:00:16.256) 0:03:00.796 ***** 2026-02-19 04:27:20.730452 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:27:20.730465 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:27:20.730476 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:27:20.730489 | orchestrator | 2026-02-19 04:27:20.730501 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-19 04:27:20.730513 | orchestrator | Thursday 19 February 2026 04:27:10 +0000 (0:00:01.727) 0:03:02.524 ***** 2026-02-19 04:27:20.730525 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-19 04:27:20.730537 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-19 04:27:20.730549 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-19 04:27:20.730561 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-19 04:27:20.730573 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-19 04:27:20.730585 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-19 04:27:20.730597 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-19 04:27:20.730609 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-19 04:27:20.730620 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-19 04:27:20.730632 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-19 04:27:20.730645 | orchestrator 
| changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-19 04:27:20.730657 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-19 04:27:20.730669 | orchestrator | 2026-02-19 04:27:20.730686 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-19 04:27:20.730698 | orchestrator | Thursday 19 February 2026 04:27:15 +0000 (0:00:05.194) 0:03:07.718 ***** 2026-02-19 04:27:20.730711 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-19 04:27:20.730723 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-19 04:27:20.730753 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-19 04:27:29.178561 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-19 04:27:29.178653 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-19 04:27:29.178664 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-19 04:27:29.178672 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-19 04:27:29.178680 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-19 04:27:29.178688 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-19 04:27:29.178695 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-19 04:27:29.178703 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-19 04:27:29.178711 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-19 04:27:29.178719 | orchestrator | 2026-02-19 04:27:29.178728 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-19 04:27:29.178736 | orchestrator | Thursday 19 February 2026 04:27:20 +0000 (0:00:05.001) 0:03:12.720 ***** 2026-02-19 04:27:29.178743 | orchestrator | changed: [testbed-node-1] => 
(item=client.cert-and-key.pem) 2026-02-19 04:27:29.178750 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-19 04:27:29.178757 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-19 04:27:29.178764 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-19 04:27:29.178771 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-19 04:27:29.178778 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-19 04:27:29.178785 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-19 04:27:29.178792 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-19 04:27:29.178799 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-19 04:27:29.178806 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-19 04:27:29.178814 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-19 04:27:29.178821 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-19 04:27:29.178828 | orchestrator | 2026-02-19 04:27:29.178835 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-19 04:27:29.178842 | orchestrator | Thursday 19 February 2026 04:27:25 +0000 (0:00:05.231) 0:03:17.951 ***** 2026-02-19 04:27:29.178852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:27:29.178865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:27:29.178923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 04:27:29.178935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:27:29.178944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-19 04:27:29.178998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-19 04:27:29.179008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:29.179017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:29.179035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-19 04:27:29.179049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:28:47.807096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:28:47.807197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-19 04:28:47.807210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:28:47.807219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:28:47.807247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-19 04:28:47.807256 | orchestrator | 2026-02-19 
04:28:47.807266 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-19 04:28:47.807274 | orchestrator | Thursday 19 February 2026 04:27:30 +0000 (0:00:04.087) 0:03:22.039 ***** 2026-02-19 04:28:47.807282 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:28:47.807304 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:28:47.807312 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:28:47.807319 | orchestrator | 2026-02-19 04:28:47.807327 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-19 04:28:47.807334 | orchestrator | Thursday 19 February 2026 04:27:30 +0000 (0:00:00.307) 0:03:22.346 ***** 2026-02-19 04:28:47.807342 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:28:47.807349 | orchestrator | 2026-02-19 04:28:47.807356 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-19 04:28:47.807363 | orchestrator | Thursday 19 February 2026 04:27:32 +0000 (0:00:02.223) 0:03:24.570 ***** 2026-02-19 04:28:47.807370 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:28:47.807378 | orchestrator | 2026-02-19 04:28:47.807385 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-19 04:28:47.807392 | orchestrator | Thursday 19 February 2026 04:27:34 +0000 (0:00:02.097) 0:03:26.668 ***** 2026-02-19 04:28:47.807399 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:28:47.807406 | orchestrator | 2026-02-19 04:28:47.807414 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-19 04:28:47.807422 | orchestrator | Thursday 19 February 2026 04:27:37 +0000 (0:00:02.355) 0:03:29.024 ***** 2026-02-19 04:28:47.807442 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:28:47.807450 | orchestrator | 2026-02-19 04:28:47.807457 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-19 04:28:47.807464 | orchestrator | Thursday 19 February 2026 04:27:39 +0000 (0:00:02.272) 0:03:31.297 ***** 2026-02-19 04:28:47.807471 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:28:47.807478 | orchestrator | 2026-02-19 04:28:47.807486 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-19 04:28:47.807493 | orchestrator | Thursday 19 February 2026 04:28:02 +0000 (0:00:23.414) 0:03:54.711 ***** 2026-02-19 04:28:47.807500 | orchestrator | 2026-02-19 04:28:47.807507 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-19 04:28:47.807514 | orchestrator | Thursday 19 February 2026 04:28:02 +0000 (0:00:00.067) 0:03:54.779 ***** 2026-02-19 04:28:47.807521 | orchestrator | 2026-02-19 04:28:47.807528 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-19 04:28:47.807535 | orchestrator | Thursday 19 February 2026 04:28:02 +0000 (0:00:00.066) 0:03:54.846 ***** 2026-02-19 04:28:47.807542 | orchestrator | 2026-02-19 04:28:47.807549 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-19 04:28:47.807556 | orchestrator | Thursday 19 February 2026 04:28:02 +0000 (0:00:00.066) 0:03:54.912 ***** 2026-02-19 04:28:47.807563 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:28:47.807572 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:28:47.807580 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:28:47.807588 | orchestrator | 2026-02-19 04:28:47.807603 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-19 04:28:47.807612 | orchestrator | Thursday 19 February 2026 04:28:19 +0000 (0:00:16.528) 0:04:11.441 ***** 2026-02-19 04:28:47.807620 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:28:47.807628 | orchestrator | changed: 
[testbed-node-2] 2026-02-19 04:28:47.807636 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:28:47.807645 | orchestrator | 2026-02-19 04:28:47.807653 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-19 04:28:47.807662 | orchestrator | Thursday 19 February 2026 04:28:30 +0000 (0:00:11.399) 0:04:22.841 ***** 2026-02-19 04:28:47.807670 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:28:47.807679 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:28:47.807687 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:28:47.807695 | orchestrator | 2026-02-19 04:28:47.807703 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-19 04:28:47.807711 | orchestrator | Thursday 19 February 2026 04:28:36 +0000 (0:00:05.435) 0:04:28.276 ***** 2026-02-19 04:28:47.807719 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:28:47.807727 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:28:47.807735 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:28:47.807744 | orchestrator | 2026-02-19 04:28:47.807752 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-19 04:28:47.807761 | orchestrator | Thursday 19 February 2026 04:28:41 +0000 (0:00:05.540) 0:04:33.817 ***** 2026-02-19 04:28:47.807769 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:28:47.807778 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:28:47.807786 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:28:47.807794 | orchestrator | 2026-02-19 04:28:47.807802 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:28:47.807812 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 04:28:47.807822 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-19 04:28:47.807831 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-19 04:28:47.807839 | orchestrator | 2026-02-19 04:28:47.807847 | orchestrator | 2026-02-19 04:28:47.807855 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:28:47.807868 | orchestrator | Thursday 19 February 2026 04:28:47 +0000 (0:00:05.978) 0:04:39.795 ***** 2026-02-19 04:28:47.807879 | orchestrator | =============================================================================== 2026-02-19 04:28:47.807898 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.41s 2026-02-19 04:28:47.807912 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.53s 2026-02-19 04:28:47.807924 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.26s 2026-02-19 04:28:47.807936 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.99s 2026-02-19 04:28:47.807955 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.74s 2026-02-19 04:28:47.807967 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.40s 2026-02-19 04:28:47.808004 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.60s 2026-02-19 04:28:47.808016 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.45s 2026-02-19 04:28:47.808029 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.49s 2026-02-19 04:28:47.808041 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.43s 2026-02-19 04:28:47.808054 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.61s 2026-02-19 04:28:47.808065 
| orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.98s 2026-02-19 04:28:47.808084 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.54s 2026-02-19 04:28:47.808092 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.44s 2026-02-19 04:28:47.808106 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.28s 2026-02-19 04:28:48.150544 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.23s 2026-02-19 04:28:48.150648 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.19s 2026-02-19 04:28:48.150663 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.15s 2026-02-19 04:28:48.150674 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.00s 2026-02-19 04:28:48.150685 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.97s 2026-02-19 04:28:50.563468 | orchestrator | 2026-02-19 04:28:50 | INFO  | Task 73e1a0c7-5798-4266-ad41-fd069a41a46c (ceilometer) was prepared for execution. 2026-02-19 04:28:50.563570 | orchestrator | 2026-02-19 04:28:50 | INFO  | It takes a moment until task 73e1a0c7-5798-4266-ad41-fd069a41a46c (ceilometer) has been started and output is visible here. 
2026-02-19 04:29:13.608322 | orchestrator | 2026-02-19 04:29:13.608445 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:29:13.608463 | orchestrator | 2026-02-19 04:29:13.608476 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 04:29:13.608488 | orchestrator | Thursday 19 February 2026 04:28:54 +0000 (0:00:00.256) 0:00:00.256 ***** 2026-02-19 04:29:13.608499 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:29:13.608512 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:29:13.608523 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:29:13.608534 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:29:13.608545 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:29:13.608556 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:29:13.608566 | orchestrator | 2026-02-19 04:29:13.608582 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:29:13.608602 | orchestrator | Thursday 19 February 2026 04:28:55 +0000 (0:00:00.693) 0:00:00.950 ***** 2026-02-19 04:29:13.608621 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-02-19 04:29:13.608644 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-02-19 04:29:13.608664 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-02-19 04:29:13.608680 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-02-19 04:29:13.608691 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-02-19 04:29:13.608702 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-02-19 04:29:13.608713 | orchestrator | 2026-02-19 04:29:13.608724 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-02-19 04:29:13.608734 | orchestrator | 2026-02-19 04:29:13.608745 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-02-19 04:29:13.608756 | orchestrator | Thursday 19 February 2026 04:28:55 +0000 (0:00:00.627) 0:00:01.577 ***** 2026-02-19 04:29:13.608768 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 04:29:13.608780 | orchestrator | 2026-02-19 04:29:13.608791 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-02-19 04:29:13.608802 | orchestrator | Thursday 19 February 2026 04:28:57 +0000 (0:00:01.206) 0:00:02.784 ***** 2026-02-19 04:29:13.608812 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:29:13.608823 | orchestrator | 2026-02-19 04:29:13.608834 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-02-19 04:29:13.608847 | orchestrator | Thursday 19 February 2026 04:28:57 +0000 (0:00:00.122) 0:00:02.907 ***** 2026-02-19 04:29:13.608859 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:29:13.608896 | orchestrator | 2026-02-19 04:29:13.608909 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-02-19 04:29:13.608922 | orchestrator | Thursday 19 February 2026 04:28:57 +0000 (0:00:00.113) 0:00:03.020 ***** 2026-02-19 04:29:13.608934 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-19 04:29:13.608947 | orchestrator | 2026-02-19 04:29:13.608959 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-02-19 04:29:13.608972 | orchestrator | Thursday 19 February 2026 04:29:01 +0000 (0:00:03.622) 0:00:06.642 ***** 2026-02-19 04:29:13.608984 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-19 04:29:13.608996 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-02-19 04:29:13.609053 | orchestrator | 
2026-02-19 04:29:13.609076 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-02-19 04:29:13.609098 | orchestrator | Thursday 19 February 2026 04:29:04 +0000 (0:00:03.890) 0:00:10.533 ***** 2026-02-19 04:29:13.609138 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-19 04:29:13.609158 | orchestrator | 2026-02-19 04:29:13.609178 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-02-19 04:29:13.609197 | orchestrator | Thursday 19 February 2026 04:29:08 +0000 (0:00:03.124) 0:00:13.658 ***** 2026-02-19 04:29:13.609216 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-02-19 04:29:13.609234 | orchestrator | 2026-02-19 04:29:13.609251 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-02-19 04:29:13.609270 | orchestrator | Thursday 19 February 2026 04:29:11 +0000 (0:00:03.800) 0:00:17.458 ***** 2026-02-19 04:29:13.609290 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:29:13.609309 | orchestrator | 2026-02-19 04:29:13.609327 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-02-19 04:29:13.609346 | orchestrator | Thursday 19 February 2026 04:29:11 +0000 (0:00:00.129) 0:00:17.588 ***** 2026-02-19 04:29:13.609369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:13.609421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:13.609444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:13.609483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:13.609505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:29:13.609525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:13.609545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:29:13.609568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:18.193746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:29:18.193863 | orchestrator | 2026-02-19 04:29:18.193879 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-19 04:29:18.193891 | orchestrator | Thursday 19 February 2026 04:29:13 +0000 (0:00:01.597) 0:00:19.185 ***** 2026-02-19 04:29:18.193901 | orchestrator | ok: 
[testbed-node-1 -> localhost] 2026-02-19 04:29:18.193913 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 04:29:18.193923 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-19 04:29:18.193932 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-19 04:29:18.193941 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-19 04:29:18.193951 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-19 04:29:18.193961 | orchestrator | 2026-02-19 04:29:18.193971 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-19 04:29:18.193982 | orchestrator | Thursday 19 February 2026 04:29:15 +0000 (0:00:01.620) 0:00:20.806 ***** 2026-02-19 04:29:18.193992 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:29:18.194002 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:29:18.194148 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:29:18.194159 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:29:18.194168 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:29:18.194178 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:29:18.194187 | orchestrator | 2026-02-19 04:29:18.194197 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-19 04:29:18.194206 | orchestrator | Thursday 19 February 2026 04:29:15 +0000 (0:00:00.607) 0:00:21.413 ***** 2026-02-19 04:29:18.194216 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:29:18.194226 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:29:18.194236 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:29:18.194246 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:29:18.194256 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:29:18.194265 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:29:18.194275 | orchestrator | 2026-02-19 04:29:18.194285 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-02-19 04:29:18.194295 | orchestrator | Thursday 19 February 2026 04:29:16 +0000 (0:00:00.750) 0:00:22.164 ***** 2026-02-19 04:29:18.194305 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:29:18.194314 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:29:18.194323 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:29:18.194333 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:29:18.194342 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:29:18.194384 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:29:18.194395 | orchestrator | 2026-02-19 04:29:18.194409 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-19 04:29:18.194419 | orchestrator | Thursday 19 February 2026 04:29:17 +0000 (0:00:00.602) 0:00:22.766 ***** 2026-02-19 04:29:18.194430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:18.194442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 04:29:18.194463 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:29:18.194492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:18.194503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 04:29:18.194513 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:29:18.194523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:18.194534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 04:29:18.194549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-19 04:29:18.194560 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:29:18.194570 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:29:18.194580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-19 04:29:18.194597 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:29:18.194614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-19 04:29:22.862707 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:29:22.862822 | orchestrator | 2026-02-19 04:29:22.862840 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-19 04:29:22.862854 | orchestrator | Thursday 19 February 2026 04:29:18 +0000 (0:00:01.013) 0:00:23.780 ***** 2026-02-19 04:29:22.862868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:22.862883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 04:29:22.862896 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:29:22.862923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:22.862936 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 04:29:22.862968 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:29:22.862981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:22.862993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-02-19 04:29:22.863004 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:29:22.863092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-19 04:29:22.863105 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:29:22.863117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-19 04:29:22.863128 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:29:22.863145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-19 04:29:22.863156 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:29:22.863176 | orchestrator | 2026-02-19 04:29:22.863189 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-19 04:29:22.863201 | orchestrator | Thursday 19 February 2026 04:29:19 +0000 (0:00:00.854) 0:00:24.635 ***** 2026-02-19 04:29:22.863213 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 04:29:22.863224 | orchestrator | 2026-02-19 04:29:22.863235 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-19 04:29:22.863249 | orchestrator | Thursday 19 February 2026 04:29:19 +0000 (0:00:00.720) 0:00:25.356 ***** 2026-02-19 04:29:22.863261 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:29:22.863274 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:29:22.863286 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:29:22.863298 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:29:22.863310 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:29:22.863322 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:29:22.863335 | orchestrator | 2026-02-19 04:29:22.863347 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-19 04:29:22.863359 | orchestrator | Thursday 19 February 2026 04:29:20 +0000 
(0:00:00.771) 0:00:26.128 ***** 2026-02-19 04:29:22.863371 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:29:22.863383 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:29:22.863395 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:29:22.863407 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:29:22.863419 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:29:22.863431 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:29:22.863443 | orchestrator | 2026-02-19 04:29:22.863455 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-19 04:29:22.863475 | orchestrator | Thursday 19 February 2026 04:29:21 +0000 (0:00:00.927) 0:00:27.055 ***** 2026-02-19 04:29:22.863503 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:29:22.863525 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:29:22.863544 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:29:22.863562 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:29:22.863579 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:29:22.863598 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:29:22.863617 | orchestrator | 2026-02-19 04:29:22.863634 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-19 04:29:22.863653 | orchestrator | Thursday 19 February 2026 04:29:22 +0000 (0:00:00.795) 0:00:27.850 ***** 2026-02-19 04:29:22.863671 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:29:22.863691 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:29:22.863709 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:29:22.863727 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:29:22.863741 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:29:22.863752 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:29:22.863762 | orchestrator | 2026-02-19 04:29:27.622437 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-02-19 04:29:27.622545 | orchestrator | Thursday 19 February 2026 04:29:22 +0000 (0:00:00.604) 0:00:28.454 ***** 2026-02-19 04:29:27.622562 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 04:29:27.622581 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-19 04:29:27.622601 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-19 04:29:27.622618 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-19 04:29:27.622637 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-19 04:29:27.622654 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-19 04:29:27.622673 | orchestrator | 2026-02-19 04:29:27.622693 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-02-19 04:29:27.622712 | orchestrator | Thursday 19 February 2026 04:29:24 +0000 (0:00:01.420) 0:00:29.875 ***** 2026-02-19 04:29:27.622736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:27.622794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:27.622816 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:29:27.622853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:27.622917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:27.622940 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:29:27.622961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:27.623006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:27.623058 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:29:27.623074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:27.623101 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:29:27.623115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:27.623128 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:29:27.623149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:27.623168 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:29:27.623185 | orchestrator |
2026-02-19 04:29:27.623203 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-02-19 04:29:27.623221 | orchestrator | Thursday 19 February 2026 04:29:25 +0000 (0:00:00.805) 0:00:30.680 *****
2026-02-19 04:29:27.623240 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:29:27.623259 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:29:27.623276 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:29:27.623295 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:29:27.623313 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:29:27.623332 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:29:27.623343 | orchestrator |
2026-02-19 04:29:27.623354 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-02-19 04:29:27.623368 | orchestrator | Thursday 19 February 2026 04:29:25 +0000 (0:00:00.777) 0:00:31.458 *****
2026-02-19 04:29:27.623386 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-19 04:29:27.623403 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-19 04:29:27.623422 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-19 04:29:27.623440 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-19 04:29:27.623459 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-19 04:29:27.623470 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-19 04:29:27.623481 | orchestrator |
2026-02-19 04:29:27.623492 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-02-19 04:29:27.623503 | orchestrator | Thursday 19 February 2026 04:29:27 +0000 (0:00:01.293) 0:00:32.752 *****
2026-02-19 04:29:27.623527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.437137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:33.437305 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:29:33.437341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.437398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:33.437413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.437425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:33.437439 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:29:33.437458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.437508 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:29:33.437529 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:29:33.437568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.437581 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:29:33.437592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.437603 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:29:33.437614 | orchestrator |
2026-02-19 04:29:33.437627 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-02-19 04:29:33.437643 | orchestrator | Thursday 19 February 2026 04:29:28 +0000 (0:00:01.141) 0:00:33.894 *****
2026-02-19 04:29:33.437663 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:29:33.437691 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:29:33.437722 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:29:33.437740 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:29:33.437759 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:29:33.437775 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:29:33.437794 | orchestrator |
2026-02-19 04:29:33.437813 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-02-19 04:29:33.437831 | orchestrator | Thursday 19 February 2026 04:29:29 +0000 (0:00:00.811) 0:00:34.706 *****
2026-02-19 04:29:33.437850 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:29:33.437869 | orchestrator |
2026-02-19 04:29:33.437882 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-02-19 04:29:33.437896 | orchestrator | Thursday 19 February 2026 04:29:29 +0000 (0:00:00.143) 0:00:34.849 *****
2026-02-19 04:29:33.437910 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:29:33.437922 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:29:33.437935 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:29:33.437947 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:29:33.437959 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:29:33.437971 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:29:33.437983 | orchestrator |
2026-02-19 04:29:33.437994 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-02-19 04:29:33.438142 | orchestrator | Thursday 19 February 2026 04:29:29 +0000 (0:00:00.605) 0:00:35.454 *****
2026-02-19 04:29:33.438160 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 04:29:33.438173 | orchestrator |
2026-02-19 04:29:33.438183 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-02-19 04:29:33.438194 | orchestrator | Thursday 19 February 2026 04:29:31 +0000 (0:00:01.308) 0:00:36.763 *****
2026-02-19 04:29:33.438206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.438232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.987691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.987790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.987823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.987856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.987869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:33.987880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:33.987908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:33.987920 | orchestrator |
2026-02-19 04:29:33.987932 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] ***
2026-02-19 04:29:33.987943 | orchestrator | Thursday 19 February 2026 04:29:33 +0000 (0:00:02.260) 0:00:39.024 *****
2026-02-19 04:29:33.987954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.987969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:33.987988 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:29:33.988000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.988010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:33.988020 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:29:33.988073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:33.988092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:35.799909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:35.800011 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:29:35.800073 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:29:35.800106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:35.800140 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:29:35.800152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:35.800163 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:29:35.800174 | orchestrator |
2026-02-19 04:29:35.800186 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] ***
2026-02-19 04:29:35.800199 | orchestrator | Thursday 19 February 2026 04:29:34 +0000 (0:00:00.858) 0:00:39.883 *****
2026-02-19 04:29:35.800211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:35.800224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:35.800254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:35.800267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:35.800291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:35.800302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-19 04:29:35.800313 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:29:35.800324 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:29:35.800335 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:29:35.800347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:35.800358 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:29:35.800369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:35.800380 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:29:35.800400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:43.175733 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:29:43.175840 | orchestrator |
2026-02-19 04:29:43.175857 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-02-19 04:29:43.175870 | orchestrator | Thursday 19 February 2026 04:29:35 +0000 (0:00:01.500) 0:00:41.384 *****
2026-02-19 04:29:43.175901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:43.175916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:43.175928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-19 04:29:43.175940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:43.175953 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:43.175984 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-19 04:29:43.176024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:29:43.176100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:29:43.176114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:29:43.176125 | orchestrator | 2026-02-19 04:29:43.176137 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-02-19 04:29:43.176148 | orchestrator | Thursday 19 February 2026 04:29:38 +0000 (0:00:02.479) 0:00:43.863 ***** 2026-02-19 04:29:43.176159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 
'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:43.176171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:43.176199 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:52.481919 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:52.482143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:52.482165 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:52.482178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:29:52.482191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:29:52.482229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:29:52.482242 | orchestrator | 2026-02-19 04:29:52.482255 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-02-19 04:29:52.482269 | orchestrator | Thursday 19 February 2026 04:29:43 +0000 (0:00:04.899) 0:00:48.763 ***** 2026-02-19 04:29:52.482299 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 04:29:52.482313 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-19 04:29:52.482324 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-19 04:29:52.482335 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-19 04:29:52.482345 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-19 04:29:52.482356 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-19 04:29:52.482367 | orchestrator | 2026-02-19 04:29:52.482378 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-02-19 04:29:52.482389 | orchestrator | Thursday 19 February 2026 04:29:44 +0000 (0:00:01.596) 0:00:50.359 ***** 2026-02-19 04:29:52.482400 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:29:52.482418 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:29:52.482432 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:29:52.482444 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:29:52.482456 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:29:52.482468 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:29:52.482481 | orchestrator | 2026-02-19 04:29:52.482494 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-02-19 04:29:52.482507 | orchestrator | Thursday 19 February 2026 04:29:45 +0000 (0:00:00.582) 0:00:50.942 ***** 2026-02-19 04:29:52.482518 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:29:52.482529 | orchestrator | skipping: [testbed-node-4] 
2026-02-19 04:29:52.482539 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:29:52.482550 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:29:52.482561 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:29:52.482572 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:29:52.482582 | orchestrator | 2026-02-19 04:29:52.482593 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-02-19 04:29:52.482604 | orchestrator | Thursday 19 February 2026 04:29:47 +0000 (0:00:01.665) 0:00:52.608 ***** 2026-02-19 04:29:52.482615 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:29:52.482625 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:29:52.482636 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:29:52.482647 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:29:52.482658 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:29:52.482669 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:29:52.482679 | orchestrator | 2026-02-19 04:29:52.482690 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-02-19 04:29:52.482701 | orchestrator | Thursday 19 February 2026 04:29:48 +0000 (0:00:01.424) 0:00:54.032 ***** 2026-02-19 04:29:52.482712 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 04:29:52.482723 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-19 04:29:52.482733 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-19 04:29:52.482744 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-19 04:29:52.482755 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-19 04:29:52.482765 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-19 04:29:52.482784 | orchestrator | 2026-02-19 04:29:52.482794 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-02-19 04:29:52.482806 | orchestrator | Thursday 19 February 2026 04:29:49 +0000 
(0:00:01.516) 0:00:55.549 ***** 2026-02-19 04:29:52.482817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:52.482830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:52.482841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:52.482865 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:53.318780 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:53.318905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-19 04:29:53.318965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:29:53.318989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:29:53.319009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 
'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:29:53.319029 | orchestrator | 2026-02-19 04:29:53.319080 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-02-19 04:29:53.319102 | orchestrator | Thursday 19 February 2026 04:29:52 +0000 (0:00:02.519) 0:00:58.069 ***** 2026-02-19 04:29:53.319141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:53.319186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 04:29:53.319207 | 
orchestrator | skipping: [testbed-node-0] 2026-02-19 04:29:53.319227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:53.319258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 04:29:53.319276 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:29:53.319295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:53.319314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 04:29:53.319334 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:29:53.319356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-19 04:29:53.319384 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:29:53.319415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-19 04:29:56.732380 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:29:56.732486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-19 04:29:56.732504 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:29:56.732517 | orchestrator | 2026-02-19 04:29:56.732529 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-02-19 04:29:56.732541 | orchestrator | Thursday 19 February 2026 04:29:53 +0000 (0:00:00.840) 0:00:58.909 ***** 2026-02-19 04:29:56.732552 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:29:56.732563 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:29:56.732574 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:29:56.732584 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:29:56.732595 | orchestrator | skipping: [testbed-node-4] 2026-02-19 
04:29:56.732606 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:29:56.732617 | orchestrator | 2026-02-19 04:29:56.732628 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-02-19 04:29:56.732640 | orchestrator | Thursday 19 February 2026 04:29:54 +0000 (0:00:00.791) 0:00:59.701 ***** 2026-02-19 04:29:56.732652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:56.732665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 04:29:56.732677 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:29:56.732688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:56.732717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 04:29:56.732763 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:29:56.732794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-19 04:29:56.732807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 
'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 04:29:56.732818 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:29:56.732829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-19 04:29:56.732840 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:29:56.732851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-19 04:29:56.732862 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:29:56.732879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-19 04:29:56.732898 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:29:56.732910 | orchestrator | 2026-02-19 04:29:56.732922 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-02-19 04:29:56.732935 | orchestrator | Thursday 19 February 2026 04:29:54 +0000 (0:00:00.860) 0:01:00.561 ***** 2026-02-19 04:29:56.732956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-19 04:30:24.600284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-19 04:30:24.600364 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-19 04:30:24.600373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-19 04:30:24.600379 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-19 04:30:24.600415 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-19 04:30:24.600423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 
'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:30:24.600441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:30:24.600448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-19 04:30:24.600454 | orchestrator | 2026-02-19 04:30:24.600461 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-19 04:30:24.600468 | orchestrator | Thursday 19 February 2026 04:29:56 +0000 (0:00:01.759) 0:01:02.321 ***** 2026-02-19 04:30:24.600473 | orchestrator | skipping: [testbed-node-0] 2026-02-19 
04:30:24.600480 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:30:24.600486 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:30:24.600491 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:30:24.600496 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:30:24.600501 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:30:24.600507 | orchestrator | 2026-02-19 04:30:24.600512 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-02-19 04:30:24.600518 | orchestrator | Thursday 19 February 2026 04:29:57 +0000 (0:00:00.604) 0:01:02.926 ***** 2026-02-19 04:30:24.600523 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:30:24.600528 | orchestrator | 2026-02-19 04:30:24.600534 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-19 04:30:24.600539 | orchestrator | Thursday 19 February 2026 04:30:02 +0000 (0:00:04.960) 0:01:07.886 ***** 2026-02-19 04:30:24.600544 | orchestrator | 2026-02-19 04:30:24.600550 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-19 04:30:24.600560 | orchestrator | Thursday 19 February 2026 04:30:02 +0000 (0:00:00.090) 0:01:07.977 ***** 2026-02-19 04:30:24.600565 | orchestrator | 2026-02-19 04:30:24.600571 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-19 04:30:24.600576 | orchestrator | Thursday 19 February 2026 04:30:02 +0000 (0:00:00.080) 0:01:08.057 ***** 2026-02-19 04:30:24.600581 | orchestrator | 2026-02-19 04:30:24.600586 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-19 04:30:24.600592 | orchestrator | Thursday 19 February 2026 04:30:02 +0000 (0:00:00.251) 0:01:08.309 ***** 2026-02-19 04:30:24.600597 | orchestrator | 2026-02-19 04:30:24.600603 | orchestrator | TASK [ceilometer : Flush handlers] 
********************************************* 2026-02-19 04:30:24.600608 | orchestrator | Thursday 19 February 2026 04:30:02 +0000 (0:00:00.071) 0:01:08.381 ***** 2026-02-19 04:30:24.600613 | orchestrator | 2026-02-19 04:30:24.600619 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-19 04:30:24.600624 | orchestrator | Thursday 19 February 2026 04:30:02 +0000 (0:00:00.066) 0:01:08.448 ***** 2026-02-19 04:30:24.600629 | orchestrator | 2026-02-19 04:30:24.600634 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-02-19 04:30:24.600640 | orchestrator | Thursday 19 February 2026 04:30:02 +0000 (0:00:00.075) 0:01:08.523 ***** 2026-02-19 04:30:24.600645 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:30:24.600650 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:30:24.600656 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:30:24.600661 | orchestrator | 2026-02-19 04:30:24.600666 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-02-19 04:30:24.600675 | orchestrator | Thursday 19 February 2026 04:30:08 +0000 (0:00:05.544) 0:01:14.067 ***** 2026-02-19 04:30:24.600680 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:30:24.600686 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:30:24.600691 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:30:24.600696 | orchestrator | 2026-02-19 04:30:24.600702 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-02-19 04:30:24.600707 | orchestrator | Thursday 19 February 2026 04:30:18 +0000 (0:00:09.561) 0:01:23.629 ***** 2026-02-19 04:30:24.600712 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:30:24.600717 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:30:24.600723 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:30:24.600728 | orchestrator | 2026-02-19 
04:30:24.600733 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:30:24.600739 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-19 04:30:24.600746 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-19 04:30:24.600755 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-19 04:30:25.111553 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-19 04:30:25.111630 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-19 04:30:25.111639 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-19 04:30:25.111646 | orchestrator | 2026-02-19 04:30:25.111652 | orchestrator | 2026-02-19 04:30:25.111658 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:30:25.111666 | orchestrator | Thursday 19 February 2026 04:30:24 +0000 (0:00:06.552) 0:01:30.182 ***** 2026-02-19 04:30:25.111672 | orchestrator | =============================================================================== 2026-02-19 04:30:25.111697 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.56s 2026-02-19 04:30:25.111704 | orchestrator | ceilometer : Restart ceilometer-compute container ----------------------- 6.55s 2026-02-19 04:30:25.111710 | orchestrator | ceilometer : Restart ceilometer-notification container ------------------ 5.54s 2026-02-19 04:30:25.111715 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.96s 2026-02-19 04:30:25.111721 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.90s 
2026-02-19 04:30:25.111727 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.89s 2026-02-19 04:30:25.111733 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.80s 2026-02-19 04:30:25.111739 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.62s 2026-02-19 04:30:25.111744 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.12s 2026-02-19 04:30:25.111750 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.52s 2026-02-19 04:30:25.111756 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.48s 2026-02-19 04:30:25.111761 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.26s 2026-02-19 04:30:25.111767 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.76s 2026-02-19 04:30:25.111773 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.67s 2026-02-19 04:30:25.111779 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.62s 2026-02-19 04:30:25.111785 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.60s 2026-02-19 04:30:25.111791 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.60s 2026-02-19 04:30:25.111796 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.52s 2026-02-19 04:30:25.111802 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.50s 2026-02-19 04:30:25.111808 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.42s 2026-02-19 04:30:27.559524 | orchestrator | 2026-02-19 04:30:27 | INFO  | Task 39127c5a-edaf-4d64-9ad8-d85c09bb7fd2 (aodh) was 
prepared for execution. 2026-02-19 04:30:27.559617 | orchestrator | 2026-02-19 04:30:27 | INFO  | It takes a moment until task 39127c5a-edaf-4d64-9ad8-d85c09bb7fd2 (aodh) has been started and output is visible here. 2026-02-19 04:31:00.397021 | orchestrator | 2026-02-19 04:31:00.397169 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:31:00.397182 | orchestrator | 2026-02-19 04:31:00.397191 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 04:31:00.397200 | orchestrator | Thursday 19 February 2026 04:30:31 +0000 (0:00:00.259) 0:00:00.259 ***** 2026-02-19 04:31:00.397207 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:31:00.397216 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:31:00.397223 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:31:00.397231 | orchestrator | 2026-02-19 04:31:00.397238 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:31:00.397259 | orchestrator | Thursday 19 February 2026 04:30:32 +0000 (0:00:00.315) 0:00:00.574 ***** 2026-02-19 04:31:00.397266 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-02-19 04:31:00.397274 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-02-19 04:31:00.397281 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-02-19 04:31:00.397288 | orchestrator | 2026-02-19 04:31:00.397296 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-02-19 04:31:00.397302 | orchestrator | 2026-02-19 04:31:00.397309 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-19 04:31:00.397317 | orchestrator | Thursday 19 February 2026 04:30:32 +0000 (0:00:00.430) 0:00:01.005 ***** 2026-02-19 04:31:00.397323 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-19 04:31:00.397349 | orchestrator | 2026-02-19 04:31:00.397357 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-02-19 04:31:00.397364 | orchestrator | Thursday 19 February 2026 04:30:33 +0000 (0:00:00.568) 0:00:01.573 ***** 2026-02-19 04:31:00.397371 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-02-19 04:31:00.397379 | orchestrator | 2026-02-19 04:31:00.397386 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-02-19 04:31:00.397393 | orchestrator | Thursday 19 February 2026 04:30:36 +0000 (0:00:03.542) 0:00:05.116 ***** 2026-02-19 04:31:00.397400 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-02-19 04:31:00.397408 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-02-19 04:31:00.397415 | orchestrator | 2026-02-19 04:31:00.397422 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-02-19 04:31:00.397429 | orchestrator | Thursday 19 February 2026 04:30:43 +0000 (0:00:06.604) 0:00:11.720 ***** 2026-02-19 04:31:00.397436 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-19 04:31:00.397444 | orchestrator | 2026-02-19 04:31:00.397451 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-02-19 04:31:00.397459 | orchestrator | Thursday 19 February 2026 04:30:46 +0000 (0:00:03.616) 0:00:15.337 ***** 2026-02-19 04:31:00.397465 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-19 04:31:00.397473 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-02-19 04:31:00.397480 | orchestrator | 2026-02-19 04:31:00.397487 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 
2026-02-19 04:31:00.397494 | orchestrator | Thursday 19 February 2026 04:30:50 +0000 (0:00:03.991) 0:00:19.329 ***** 2026-02-19 04:31:00.397502 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-19 04:31:00.397509 | orchestrator | 2026-02-19 04:31:00.397516 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-02-19 04:31:00.397523 | orchestrator | Thursday 19 February 2026 04:30:54 +0000 (0:00:03.528) 0:00:22.858 ***** 2026-02-19 04:31:00.397530 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-02-19 04:31:00.397538 | orchestrator | 2026-02-19 04:31:00.397545 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-02-19 04:31:00.397552 | orchestrator | Thursday 19 February 2026 04:30:58 +0000 (0:00:04.048) 0:00:26.906 ***** 2026-02-19 04:31:00.397562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:00.397589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:00.397606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:00.397615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:00.397624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:00.397632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:00.397641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:00.397654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:01.750963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:01.751069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 
04:31:01.751114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:01.751127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:01.751138 | orchestrator | 2026-02-19 04:31:01.751153 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-02-19 04:31:01.751165 | orchestrator | Thursday 19 February 2026 04:31:00 +0000 (0:00:02.003) 0:00:28.910 ***** 2026-02-19 04:31:01.751176 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:31:01.751189 | orchestrator | 2026-02-19 04:31:01.751200 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-02-19 04:31:01.751210 | orchestrator | Thursday 19 February 2026 04:31:00 +0000 (0:00:00.169) 0:00:29.080 ***** 2026-02-19 04:31:01.751221 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:31:01.751232 | orchestrator | skipping: 
[testbed-node-1] 2026-02-19 04:31:01.751243 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:31:01.751254 | orchestrator | 2026-02-19 04:31:01.751265 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-02-19 04:31:01.751275 | orchestrator | Thursday 19 February 2026 04:31:01 +0000 (0:00:00.519) 0:00:29.599 ***** 2026-02-19 04:31:01.751288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-19 04:31:01.751349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 04:31:01.751364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:31:01.751375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 04:31:01.751386 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:31:01.751398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-19 04:31:01.751410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 04:31:01.751421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:31:01.751448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-02-19 04:31:06.849837 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:31:06.850162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-19 04:31:06.850195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 04:31:06.850210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:31:06.850221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 04:31:06.850233 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:31:06.850245 | orchestrator | 2026-02-19 04:31:06.850258 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-19 04:31:06.850293 | orchestrator | Thursday 19 February 2026 04:31:01 +0000 (0:00:00.671) 0:00:30.271 ***** 2026-02-19 04:31:06.850305 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:31:06.850316 | orchestrator | 2026-02-19 04:31:06.850327 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-19 04:31:06.850338 | orchestrator | Thursday 19 February 2026 04:31:02 +0000 (0:00:00.791) 0:00:31.062 ***** 2026-02-19 04:31:06.850350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:06.850388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:06.850402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:06.850414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:06.850425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:06.850444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:06.850455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:06.850480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:07.495277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:07.495368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:07.495382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:07.495414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:07.495425 | orchestrator | 2026-02-19 04:31:07.495436 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-19 04:31:07.495446 | orchestrator | Thursday 19 February 2026 04:31:06 +0000 (0:00:04.307) 0:00:35.370 ***** 2026-02-19 04:31:07.495457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-19 04:31:07.495481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 04:31:07.495507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 
'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:31:07.495517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 04:31:07.495527 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:31:07.495537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-19 04:31:07.495553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 04:31:07.495562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:31:07.495571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  
2026-02-19 04:31:07.495584 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:31:07.495602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-19 04:31:08.539535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 04:31:08.539636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:31:08.539676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 04:31:08.539690 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:31:08.539705 | orchestrator | 2026-02-19 04:31:08.539717 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-19 04:31:08.539730 | orchestrator | Thursday 19 February 2026 04:31:07 +0000 (0:00:00.643) 0:00:36.014 ***** 2026-02-19 04:31:08.539742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-19 04:31:08.539770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 04:31:08.539782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:31:08.539829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-02-19 04:31:08.539862 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:31:08.539875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-19 04:31:08.539887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 04:31:08.539898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:31:08.539909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 04:31:08.539926 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:31:08.539946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-19 04:31:12.627981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 04:31:12.628159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 04:31:12.628181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 04:31:12.628195 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:31:12.628209 | orchestrator | 2026-02-19 04:31:12.628222 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 
2026-02-19 04:31:12.628234 | orchestrator | Thursday 19 February 2026 04:31:08 +0000 (0:00:01.044) 0:00:37.058 ***** 2026-02-19 04:31:12.628246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:12.628274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:12.628303 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:12.628322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:12.628338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:12.628354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:12.628367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:12.628381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:12.628392 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:12.628415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:21.374983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:21.375141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:21.375162 | orchestrator | 2026-02-19 04:31:21.375176 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-02-19 04:31:21.375189 | orchestrator | Thursday 19 February 2026 04:31:12 +0000 (0:00:04.084) 0:00:41.143 ***** 2026-02-19 04:31:21.375202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:21.375235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:21.375269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:21.375300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:21.375313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:21.375324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:21.375335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:21.375352 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:21.375364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:21.375383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:21.375404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:26.299359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:26.299508 | orchestrator | 2026-02-19 04:31:26.299532 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-02-19 04:31:26.299553 | orchestrator | Thursday 19 February 2026 04:31:21 +0000 (0:00:08.746) 0:00:49.890 ***** 2026-02-19 04:31:26.299571 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:31:26.299591 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:31:26.299608 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:31:26.299626 | orchestrator | 2026-02-19 04:31:26.299645 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-02-19 04:31:26.299665 | orchestrator | Thursday 19 February 2026 04:31:23 +0000 (0:00:01.747) 0:00:51.637 ***** 2026-02-19 04:31:26.299686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:26.299728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:26.299777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-19 04:31:26.299811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:26.299823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:26.299834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-19 04:31:26.299844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:26.299867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:26.299877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:26.299888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:31:26.299905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:32:21.477946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-19 04:32:21.478114 | orchestrator | 2026-02-19 04:32:21.478208 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-19 04:32:21.478228 | orchestrator | Thursday 19 February 2026 04:31:26 +0000 (0:00:03.176) 0:00:54.814 ***** 2026-02-19 04:32:21.478245 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:32:21.478262 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:32:21.478295 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:32:21.478313 | orchestrator | 2026-02-19 04:32:21.478330 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-02-19 04:32:21.478347 | orchestrator | Thursday 19 February 2026 04:31:26 +0000 (0:00:00.314) 0:00:55.129 ***** 2026-02-19 04:32:21.478364 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:32:21.478376 | orchestrator | 2026-02-19 04:32:21.478386 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-02-19 04:32:21.478395 | orchestrator | Thursday 19 February 2026 04:31:28 +0000 (0:00:02.243) 0:00:57.372 ***** 2026-02-19 04:32:21.478428 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:32:21.478439 | orchestrator | 2026-02-19 04:32:21.478448 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-02-19 04:32:21.478458 | orchestrator | Thursday 19 February 2026 04:31:31 +0000 (0:00:02.346) 0:00:59.719 ***** 2026-02-19 04:32:21.478467 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:32:21.478477 | orchestrator | 2026-02-19 04:32:21.478489 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-19 04:32:21.478500 | orchestrator | Thursday 
19 February 2026 04:31:45 +0000 (0:00:13.934) 0:01:13.653 ***** 2026-02-19 04:32:21.478511 | orchestrator | 2026-02-19 04:32:21.478521 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-19 04:32:21.478532 | orchestrator | Thursday 19 February 2026 04:31:45 +0000 (0:00:00.070) 0:01:13.723 ***** 2026-02-19 04:32:21.478542 | orchestrator | 2026-02-19 04:32:21.478553 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-19 04:32:21.478565 | orchestrator | Thursday 19 February 2026 04:31:45 +0000 (0:00:00.070) 0:01:13.793 ***** 2026-02-19 04:32:21.478575 | orchestrator | 2026-02-19 04:32:21.478586 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-02-19 04:32:21.478612 | orchestrator | Thursday 19 February 2026 04:31:45 +0000 (0:00:00.270) 0:01:14.064 ***** 2026-02-19 04:32:21.478624 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:32:21.478635 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:32:21.478645 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:32:21.478657 | orchestrator | 2026-02-19 04:32:21.478668 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-02-19 04:32:21.478677 | orchestrator | Thursday 19 February 2026 04:31:55 +0000 (0:00:10.453) 0:01:24.518 ***** 2026-02-19 04:32:21.478687 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:32:21.478696 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:32:21.478706 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:32:21.478715 | orchestrator | 2026-02-19 04:32:21.478724 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-02-19 04:32:21.478734 | orchestrator | Thursday 19 February 2026 04:32:05 +0000 (0:00:09.787) 0:01:34.305 ***** 2026-02-19 04:32:21.478743 | orchestrator | changed: [testbed-node-0] 2026-02-19 
04:32:21.478753 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:32:21.478762 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:32:21.478772 | orchestrator | 2026-02-19 04:32:21.478781 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-02-19 04:32:21.478791 | orchestrator | Thursday 19 February 2026 04:32:10 +0000 (0:00:05.060) 0:01:39.366 ***** 2026-02-19 04:32:21.478800 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:32:21.478810 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:32:21.478819 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:32:21.478828 | orchestrator | 2026-02-19 04:32:21.478838 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:32:21.478848 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-19 04:32:21.478860 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-19 04:32:21.478870 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-19 04:32:21.478942 | orchestrator | 2026-02-19 04:32:21.478954 | orchestrator | 2026-02-19 04:32:21.478964 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:32:21.478973 | orchestrator | Thursday 19 February 2026 04:32:21 +0000 (0:00:10.283) 0:01:49.649 ***** 2026-02-19 04:32:21.478983 | orchestrator | =============================================================================== 2026-02-19 04:32:21.479002 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.93s 2026-02-19 04:32:21.479012 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.45s 2026-02-19 04:32:21.479042 | orchestrator | aodh : Restart aodh-notifier container 
--------------------------------- 10.28s 2026-02-19 04:32:21.479053 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 9.79s 2026-02-19 04:32:21.479062 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.75s 2026-02-19 04:32:21.479072 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.60s 2026-02-19 04:32:21.479082 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 5.06s 2026-02-19 04:32:21.479099 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.31s 2026-02-19 04:32:21.479148 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.08s 2026-02-19 04:32:21.479166 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 4.05s 2026-02-19 04:32:21.479181 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.99s 2026-02-19 04:32:21.479196 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.62s 2026-02-19 04:32:21.479211 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.54s 2026-02-19 04:32:21.479227 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.53s 2026-02-19 04:32:21.479242 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.18s 2026-02-19 04:32:21.479257 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.35s 2026-02-19 04:32:21.479274 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.24s 2026-02-19 04:32:21.479289 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.00s 2026-02-19 04:32:21.479304 | orchestrator | aodh : Copying over wsgi-aodh files for services 
------------------------ 1.75s 2026-02-19 04:32:21.479338 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.04s 2026-02-19 04:32:23.741414 | orchestrator | 2026-02-19 04:32:23 | INFO  | Task 94ed1f10-a4ae-439b-88de-e4698eef71b7 (kolla-ceph-rgw) was prepared for execution. 2026-02-19 04:32:23.741481 | orchestrator | 2026-02-19 04:32:23 | INFO  | It takes a moment until task 94ed1f10-a4ae-439b-88de-e4698eef71b7 (kolla-ceph-rgw) has been started and output is visible here. 2026-02-19 04:32:59.178071 | orchestrator | 2026-02-19 04:32:59.178261 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:32:59.178282 | orchestrator | 2026-02-19 04:32:59.178292 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 04:32:59.178305 | orchestrator | Thursday 19 February 2026 04:32:27 +0000 (0:00:00.272) 0:00:00.272 ***** 2026-02-19 04:32:59.178317 | orchestrator | ok: [testbed-manager] 2026-02-19 04:32:59.178329 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:32:59.178356 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:32:59.178368 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:32:59.178379 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:32:59.178390 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:32:59.178401 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:32:59.178412 | orchestrator | 2026-02-19 04:32:59.178422 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:32:59.178433 | orchestrator | Thursday 19 February 2026 04:32:28 +0000 (0:00:00.860) 0:00:01.133 ***** 2026-02-19 04:32:59.178445 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-19 04:32:59.178456 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-19 04:32:59.178467 | orchestrator | ok: [testbed-node-1] => 
(item=enable_ceph_rgw_True) 2026-02-19 04:32:59.178478 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-19 04:32:59.178490 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-19 04:32:59.178522 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-19 04:32:59.178535 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-19 04:32:59.178545 | orchestrator | 2026-02-19 04:32:59.178556 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-19 04:32:59.178567 | orchestrator | 2026-02-19 04:32:59.178579 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-19 04:32:59.178590 | orchestrator | Thursday 19 February 2026 04:32:29 +0000 (0:00:00.796) 0:00:01.929 ***** 2026-02-19 04:32:59.178602 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 04:32:59.178615 | orchestrator | 2026-02-19 04:32:59.178626 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-02-19 04:32:59.178661 | orchestrator | Thursday 19 February 2026 04:32:31 +0000 (0:00:01.582) 0:00:03.512 ***** 2026-02-19 04:32:59.178673 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-02-19 04:32:59.178682 | orchestrator | 2026-02-19 04:32:59.178691 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-02-19 04:32:59.178699 | orchestrator | Thursday 19 February 2026 04:32:34 +0000 (0:00:03.744) 0:00:07.257 ***** 2026-02-19 04:32:59.178710 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-19 04:32:59.178722 | orchestrator | changed: [testbed-manager] => 
(item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-19 04:32:59.178732 | orchestrator | 2026-02-19 04:32:59.178740 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-19 04:32:59.178749 | orchestrator | Thursday 19 February 2026 04:32:40 +0000 (0:00:06.055) 0:00:13.312 ***** 2026-02-19 04:32:59.178758 | orchestrator | ok: [testbed-manager] => (item=service) 2026-02-19 04:32:59.178767 | orchestrator | 2026-02-19 04:32:59.178776 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-19 04:32:59.178785 | orchestrator | Thursday 19 February 2026 04:32:43 +0000 (0:00:03.041) 0:00:16.354 ***** 2026-02-19 04:32:59.178795 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-19 04:32:59.178805 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-02-19 04:32:59.178813 | orchestrator | 2026-02-19 04:32:59.178823 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-19 04:32:59.178831 | orchestrator | Thursday 19 February 2026 04:32:47 +0000 (0:00:03.699) 0:00:20.053 ***** 2026-02-19 04:32:59.178840 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-02-19 04:32:59.178848 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-02-19 04:32:59.178856 | orchestrator | 2026-02-19 04:32:59.178865 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-02-19 04:32:59.178873 | orchestrator | Thursday 19 February 2026 04:32:53 +0000 (0:00:06.157) 0:00:26.211 ***** 2026-02-19 04:32:59.178882 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-02-19 04:32:59.178889 | orchestrator | 2026-02-19 04:32:59.178897 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 
04:32:59.178905 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:32:59.178915 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:32:59.178923 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:32:59.178932 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:32:59.178951 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:32:59.178979 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:32:59.178990 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:32:59.178999 | orchestrator | 2026-02-19 04:32:59.179008 | orchestrator | 2026-02-19 04:32:59.179018 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:32:59.179028 | orchestrator | Thursday 19 February 2026 04:32:58 +0000 (0:00:04.863) 0:00:31.074 ***** 2026-02-19 04:32:59.179046 | orchestrator | =============================================================================== 2026-02-19 04:32:59.179055 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.16s 2026-02-19 04:32:59.179064 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.06s 2026-02-19 04:32:59.179073 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.86s 2026-02-19 04:32:59.179082 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.74s 2026-02-19 04:32:59.179092 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.70s 2026-02-19 
04:32:59.179102 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.04s 2026-02-19 04:32:59.179112 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.58s 2026-02-19 04:32:59.179145 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s 2026-02-19 04:32:59.179155 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s 2026-02-19 04:33:01.507901 | orchestrator | 2026-02-19 04:33:01 | INFO  | Task 33473fcc-6d5d-46b1-8638-5b9efa5056bd (gnocchi) was prepared for execution. 2026-02-19 04:33:01.508004 | orchestrator | 2026-02-19 04:33:01 | INFO  | It takes a moment until task 33473fcc-6d5d-46b1-8638-5b9efa5056bd (gnocchi) has been started and output is visible here. 2026-02-19 04:33:06.573463 | orchestrator | 2026-02-19 04:33:06.573570 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:33:06.573587 | orchestrator | 2026-02-19 04:33:06.573600 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 04:33:06.573612 | orchestrator | Thursday 19 February 2026 04:33:05 +0000 (0:00:00.264) 0:00:00.264 ***** 2026-02-19 04:33:06.573623 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:33:06.573635 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:33:06.573646 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:33:06.573657 | orchestrator | 2026-02-19 04:33:06.573668 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:33:06.573679 | orchestrator | Thursday 19 February 2026 04:33:05 +0000 (0:00:00.305) 0:00:00.569 ***** 2026-02-19 04:33:06.573690 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-02-19 04:33:06.573702 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 
2026-02-19 04:33:06.573713 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-02-19 04:33:06.573724 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-02-19 04:33:06.573735 | orchestrator | 2026-02-19 04:33:06.573746 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-02-19 04:33:06.573757 | orchestrator | skipping: no hosts matched 2026-02-19 04:33:06.573768 | orchestrator | 2026-02-19 04:33:06.573779 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:33:06.573791 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:33:06.573804 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:33:06.573844 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:33:06.573856 | orchestrator | 2026-02-19 04:33:06.573867 | orchestrator | 2026-02-19 04:33:06.573878 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:33:06.573889 | orchestrator | Thursday 19 February 2026 04:33:06 +0000 (0:00:00.357) 0:00:00.927 ***** 2026-02-19 04:33:06.573899 | orchestrator | =============================================================================== 2026-02-19 04:33:06.573940 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2026-02-19 04:33:06.573952 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-02-19 04:33:08.865865 | orchestrator | 2026-02-19 04:33:08 | INFO  | Task b5741dfb-2d9d-45c4-9201-1c8fb1cbeb08 (manila) was prepared for execution. 
2026-02-19 04:33:08.866186 | orchestrator | 2026-02-19 04:33:08 | INFO  | It takes a moment until task b5741dfb-2d9d-45c4-9201-1c8fb1cbeb08 (manila) has been started and output is visible here. 2026-02-19 04:33:52.216435 | orchestrator | 2026-02-19 04:33:52.216592 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:33:52.216622 | orchestrator | 2026-02-19 04:33:52.216644 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 04:33:52.216665 | orchestrator | Thursday 19 February 2026 04:33:13 +0000 (0:00:00.290) 0:00:00.290 ***** 2026-02-19 04:33:52.216681 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:33:52.216694 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:33:52.216705 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:33:52.216716 | orchestrator | 2026-02-19 04:33:52.216727 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:33:52.216738 | orchestrator | Thursday 19 February 2026 04:33:13 +0000 (0:00:00.357) 0:00:00.648 ***** 2026-02-19 04:33:52.216749 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-02-19 04:33:52.216760 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-02-19 04:33:52.216771 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-02-19 04:33:52.216782 | orchestrator | 2026-02-19 04:33:52.216792 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-02-19 04:33:52.216803 | orchestrator | 2026-02-19 04:33:52.216814 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-19 04:33:52.216842 | orchestrator | Thursday 19 February 2026 04:33:13 +0000 (0:00:00.418) 0:00:01.067 ***** 2026-02-19 04:33:52.216854 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-19 04:33:52.216866 | orchestrator | 2026-02-19 04:33:52.216877 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-19 04:33:52.216887 | orchestrator | Thursday 19 February 2026 04:33:14 +0000 (0:00:00.574) 0:00:01.641 ***** 2026-02-19 04:33:52.216898 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:33:52.216909 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:33:52.216920 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:33:52.216931 | orchestrator | 2026-02-19 04:33:52.216941 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-02-19 04:33:52.216954 | orchestrator | Thursday 19 February 2026 04:33:14 +0000 (0:00:00.500) 0:00:02.142 ***** 2026-02-19 04:33:52.216967 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-02-19 04:33:52.216980 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-02-19 04:33:52.216992 | orchestrator | 2026-02-19 04:33:52.217004 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-02-19 04:33:52.217015 | orchestrator | Thursday 19 February 2026 04:33:21 +0000 (0:00:06.805) 0:00:08.948 ***** 2026-02-19 04:33:52.217028 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-02-19 04:33:52.217065 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-02-19 04:33:52.217078 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-02-19 04:33:52.217091 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-02-19 04:33:52.217104 | orchestrator | 2026-02-19 04:33:52.217116 | orchestrator | TASK [service-ks-register : manila 
| Creating projects] ************************ 2026-02-19 04:33:52.217128 | orchestrator | Thursday 19 February 2026 04:33:35 +0000 (0:00:13.477) 0:00:22.425 ***** 2026-02-19 04:33:52.217141 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-19 04:33:52.217180 | orchestrator | 2026-02-19 04:33:52.217196 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-02-19 04:33:52.217213 | orchestrator | Thursday 19 February 2026 04:33:38 +0000 (0:00:03.449) 0:00:25.875 ***** 2026-02-19 04:33:52.217229 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-19 04:33:52.217240 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-02-19 04:33:52.217251 | orchestrator | 2026-02-19 04:33:52.217262 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-02-19 04:33:52.217273 | orchestrator | Thursday 19 February 2026 04:33:42 +0000 (0:00:04.000) 0:00:29.875 ***** 2026-02-19 04:33:52.217283 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-19 04:33:52.217294 | orchestrator | 2026-02-19 04:33:52.217305 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-02-19 04:33:52.217315 | orchestrator | Thursday 19 February 2026 04:33:46 +0000 (0:00:03.294) 0:00:33.169 ***** 2026-02-19 04:33:52.217326 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-02-19 04:33:52.217337 | orchestrator | 2026-02-19 04:33:52.217348 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-02-19 04:33:52.217358 | orchestrator | Thursday 19 February 2026 04:33:50 +0000 (0:00:04.025) 0:00:37.195 ***** 2026-02-19 04:33:52.217399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:33:52.217431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:33:52.217452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:33:52.217483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:33:52.217497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:33:52.217508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 
'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:33:52.217531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:02.837770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:02.837909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:02.837966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:02.837988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:02.838002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:02.838077 | orchestrator | 2026-02-19 04:34:02.838095 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-19 04:34:02.838107 | orchestrator | Thursday 19 February 2026 04:33:52 +0000 (0:00:02.251) 0:00:39.446 ***** 2026-02-19 04:34:02.838119 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:34:02.838130 | orchestrator | 2026-02-19 04:34:02.838141 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-02-19 04:34:02.838152 | orchestrator | Thursday 19 February 2026 04:33:52 +0000 (0:00:00.581) 0:00:40.027 ***** 2026-02-19 04:34:02.838214 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:34:02.838230 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:34:02.838241 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:34:02.838252 | orchestrator | 2026-02-19 04:34:02.838263 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-02-19 04:34:02.838277 | orchestrator | Thursday 19 February 2026 04:33:53 +0000 (0:00:00.964) 0:00:40.992 ***** 2026-02-19 04:34:02.838290 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-19 04:34:02.838324 | orchestrator | 
skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-19 04:34:02.838351 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-19 04:34:02.838363 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-19 04:34:02.838383 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-19 04:34:02.838393 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-19 04:34:02.838404 | orchestrator | 2026-02-19 04:34:02.838415 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-02-19 04:34:02.838426 | orchestrator | Thursday 19 February 2026 04:33:55 +0000 (0:00:01.720) 0:00:42.713 ***** 2026-02-19 04:34:02.838437 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-19 04:34:02.838448 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-19 04:34:02.838458 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 
'CIFS']})  2026-02-19 04:34:02.838469 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-19 04:34:02.838480 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-19 04:34:02.838490 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-19 04:34:02.838501 | orchestrator | 2026-02-19 04:34:02.838512 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-02-19 04:34:02.838523 | orchestrator | Thursday 19 February 2026 04:33:56 +0000 (0:00:01.258) 0:00:43.971 ***** 2026-02-19 04:34:02.838543 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-02-19 04:34:02.838572 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-02-19 04:34:02.838596 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-02-19 04:34:02.838616 | orchestrator | 2026-02-19 04:34:02.838636 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-02-19 04:34:02.838655 | orchestrator | Thursday 19 February 2026 04:33:57 +0000 (0:00:00.727) 0:00:44.698 ***** 2026-02-19 04:34:02.838674 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:34:02.838692 | orchestrator | 2026-02-19 04:34:02.838711 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-02-19 04:34:02.838730 | orchestrator | Thursday 19 February 2026 04:33:57 +0000 (0:00:00.143) 0:00:44.842 ***** 2026-02-19 04:34:02.838749 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:34:02.838767 | orchestrator | skipping: 
[testbed-node-1] 2026-02-19 04:34:02.838785 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:34:02.838801 | orchestrator | 2026-02-19 04:34:02.838818 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-19 04:34:02.838837 | orchestrator | Thursday 19 February 2026 04:33:58 +0000 (0:00:00.549) 0:00:45.392 ***** 2026-02-19 04:34:02.838859 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:34:02.838879 | orchestrator | 2026-02-19 04:34:02.838911 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-02-19 04:34:02.838932 | orchestrator | Thursday 19 February 2026 04:33:58 +0000 (0:00:00.583) 0:00:45.975 ***** 2026-02-19 04:34:02.838967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:34:03.728765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:34:03.728862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:34:03.728876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:03.728888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:03.728920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:03.728951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:03.728978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:03.728996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:03.729013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:03.729030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:03.729046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:03.729075 | orchestrator | 2026-02-19 04:34:03.729097 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-02-19 04:34:03.729117 | orchestrator | Thursday 19 February 2026 04:34:02 +0000 (0:00:04.100) 0:00:50.076 ***** 2026-02-19 04:34:03.729148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-19 04:34:04.351847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:34:04.351976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 04:34:04.352003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 04:34:04.352025 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:34:04.352050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-19 04:34:04.352105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:34:04.352125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 04:34:04.352207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 04:34:04.352231 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:34:04.352250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-19 04:34:04.352270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:34:04.352304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 04:34:04.352324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 04:34:04.352343 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:34:04.352362 | orchestrator | 2026-02-19 04:34:04.352384 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-02-19 04:34:04.352407 | orchestrator | Thursday 19 February 2026 04:34:03 +0000 (0:00:00.888) 0:00:50.965 ***** 2026-02-19 04:34:04.352448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-19 04:34:08.926641 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:34:08.926766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 04:34:08.926796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 04:34:08.926846 | 
orchestrator | skipping: [testbed-node-0] 2026-02-19 04:34:08.926871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-19 04:34:08.926890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:34:08.926927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 04:34:08.926976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 04:34:08.926998 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:34:08.927039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-19 04:34:08.927078 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:34:08.927090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 04:34:08.927101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 04:34:08.927112 | orchestrator | skipping: [testbed-node-2] 2026-02-19 
04:34:08.927123 | orchestrator | 2026-02-19 04:34:08.927136 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-02-19 04:34:08.927149 | orchestrator | Thursday 19 February 2026 04:34:04 +0000 (0:00:00.879) 0:00:51.844 ***** 2026-02-19 04:34:08.927231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:34:15.522790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:34:15.522920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:34:15.522938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:15.522952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:15.522964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:15.523006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:15.523021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:15.523043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:15.523055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:15.523066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:15.523080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:15.523100 | orchestrator | 2026-02-19 04:34:15.523119 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-02-19 04:34:15.523141 | orchestrator | Thursday 19 February 2026 04:34:09 +0000 (0:00:04.506) 0:00:56.352 ***** 2026-02-19 04:34:15.523289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:34:19.659088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:34:19.659215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:34:19.659231 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:19.659243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 04:34:19.659271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:19.659298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 04:34:19.659329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:19.659340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 04:34:19.659350 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:19.659360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:19.659370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:34:19.659380 | orchestrator | 2026-02-19 04:34:19.659420 | orchestrator | TASK [manila : 
Copying over manila-share.conf] ********************************* 2026-02-19 04:34:19.659437 | orchestrator | Thursday 19 February 2026 04:34:15 +0000 (0:00:06.404) 0:01:02.756 ***** 2026-02-19 04:34:19.659448 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-02-19 04:34:19.659458 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-02-19 04:34:19.659468 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-02-19 04:34:19.659484 | orchestrator | 2026-02-19 04:34:19.659494 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-02-19 04:34:19.659504 | orchestrator | Thursday 19 February 2026 04:34:19 +0000 (0:00:03.490) 0:01:06.247 ***** 2026-02-19 04:34:19.659523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-19 04:34:23.060720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:34:23.060822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 04:34:23.060837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 04:34:23.060849 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:34:23.060862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-19 04:34:23.060909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:34:23.060921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 04:34:23.060947 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 04:34:23.060958 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:34:23.060968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-19 04:34:23.060979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 04:34:23.060989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 04:34:23.061011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 04:34:23.061022 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:34:23.061033 | orchestrator | 2026-02-19 04:34:23.061044 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-02-19 04:34:23.061055 | orchestrator | Thursday 19 February 2026 04:34:19 +0000 (0:00:00.663) 0:01:06.910 ***** 2026-02-19 04:34:23.061073 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:35:06.206805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:35:06.206922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-19 04:35:06.206940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:35:06.206993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:35:06.207006 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-19 04:35:06.207035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-19 04:35:06.207049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-19 04:35:06.207061 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-19 04:35:06.207072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:35:06.207097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:35:06.207109 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-19 04:35:06.207121 | orchestrator | 2026-02-19 04:35:06.207135 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-02-19 04:35:06.207147 | orchestrator | Thursday 19 February 2026 04:34:23 +0000 (0:00:03.392) 0:01:10.303 ***** 2026-02-19 04:35:06.207158 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:35:06.207171 | orchestrator | 2026-02-19 04:35:06.207182 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-02-19 04:35:06.207193 | orchestrator | Thursday 19 February 2026 04:34:25 +0000 (0:00:02.160) 0:01:12.463 ***** 2026-02-19 04:35:06.207203 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:35:06.207243 | orchestrator | 2026-02-19 04:35:06.207257 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-02-19 04:35:06.207268 | orchestrator | Thursday 19 February 2026 04:34:27 +0000 (0:00:02.278) 0:01:14.742 ***** 2026-02-19 04:35:06.207279 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:35:06.207290 | orchestrator | 2026-02-19 04:35:06.207300 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-19 04:35:06.207311 | orchestrator | Thursday 19 February 2026 04:35:05 +0000 (0:00:38.361) 0:01:53.103 ***** 2026-02-19 04:35:06.207322 | 
orchestrator | 2026-02-19 04:35:06.207354 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-19 04:35:55.836120 | orchestrator | Thursday 19 February 2026 04:35:06 +0000 (0:00:00.087) 0:01:53.191 ***** 2026-02-19 04:35:55.836237 | orchestrator | 2026-02-19 04:35:55.836312 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-19 04:35:55.836327 | orchestrator | Thursday 19 February 2026 04:35:06 +0000 (0:00:00.076) 0:01:53.268 ***** 2026-02-19 04:35:55.836338 | orchestrator | 2026-02-19 04:35:55.836349 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-02-19 04:35:55.836360 | orchestrator | Thursday 19 February 2026 04:35:06 +0000 (0:00:00.072) 0:01:53.340 ***** 2026-02-19 04:35:55.836375 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:35:55.836394 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:35:55.836413 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:35:55.836431 | orchestrator | 2026-02-19 04:35:55.836449 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-02-19 04:35:55.836468 | orchestrator | Thursday 19 February 2026 04:35:20 +0000 (0:00:14.197) 0:02:07.538 ***** 2026-02-19 04:35:55.836486 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:35:55.836504 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:35:55.836555 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:35:55.836576 | orchestrator | 2026-02-19 04:35:55.836622 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-02-19 04:35:55.836644 | orchestrator | Thursday 19 February 2026 04:35:31 +0000 (0:00:10.831) 0:02:18.369 ***** 2026-02-19 04:35:55.836656 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:35:55.836677 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:35:55.836695 | 
orchestrator | changed: [testbed-node-0] 2026-02-19 04:35:55.836714 | orchestrator | 2026-02-19 04:35:55.836733 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-02-19 04:35:55.836751 | orchestrator | Thursday 19 February 2026 04:35:41 +0000 (0:00:10.339) 0:02:28.709 ***** 2026-02-19 04:35:55.836771 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:35:55.836790 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:35:55.836845 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:35:55.836865 | orchestrator | 2026-02-19 04:35:55.836879 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:35:55.836893 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-19 04:35:55.836906 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-19 04:35:55.836918 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-19 04:35:55.836931 | orchestrator | 2026-02-19 04:35:55.836942 | orchestrator | 2026-02-19 04:35:55.836955 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:35:55.836966 | orchestrator | Thursday 19 February 2026 04:35:55 +0000 (0:00:13.797) 0:02:42.507 ***** 2026-02-19 04:35:55.836979 | orchestrator | =============================================================================== 2026-02-19 04:35:55.836990 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 38.36s 2026-02-19 04:35:55.837003 | orchestrator | manila : Restart manila-api container ---------------------------------- 14.20s 2026-02-19 04:35:55.837016 | orchestrator | manila : Restart manila-share container -------------------------------- 13.80s 2026-02-19 04:35:55.837043 | orchestrator | service-ks-register : 
manila | Creating endpoints ---------------------- 13.48s 2026-02-19 04:35:55.837054 | orchestrator | manila : Restart manila-data container --------------------------------- 10.83s 2026-02-19 04:35:55.837065 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.34s 2026-02-19 04:35:55.837075 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.81s 2026-02-19 04:35:55.837086 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.40s 2026-02-19 04:35:55.837097 | orchestrator | manila : Copying over config.json files for services -------------------- 4.51s 2026-02-19 04:35:55.837107 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.10s 2026-02-19 04:35:55.837118 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 4.03s 2026-02-19 04:35:55.837128 | orchestrator | service-ks-register : manila | Creating users --------------------------- 4.00s 2026-02-19 04:35:55.837221 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.49s 2026-02-19 04:35:55.837233 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.45s 2026-02-19 04:35:55.837244 | orchestrator | manila : Check manila containers ---------------------------------------- 3.39s 2026-02-19 04:35:55.837290 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.29s 2026-02-19 04:35:55.837302 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.28s 2026-02-19 04:35:55.837313 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.25s 2026-02-19 04:35:55.837324 | orchestrator | manila : Creating Manila database --------------------------------------- 2.16s 2026-02-19 04:35:55.837349 | orchestrator | manila : Copy over multiple ceph 
configs for Manila --------------------- 1.72s 2026-02-19 04:35:56.178638 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-02-19 04:36:08.319473 | orchestrator | 2026-02-19 04:36:08 | INFO  | Task 4aa283c9-2b7f-416f-982c-adfe9e8580ee (netdata) was prepared for execution. 2026-02-19 04:36:08.319614 | orchestrator | 2026-02-19 04:36:08 | INFO  | It takes a moment until task 4aa283c9-2b7f-416f-982c-adfe9e8580ee (netdata) has been started and output is visible here. 2026-02-19 04:37:43.650069 | orchestrator | 2026-02-19 04:37:43.650170 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:37:43.650182 | orchestrator | 2026-02-19 04:37:43.650191 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:37:43.650199 | orchestrator | Thursday 19 February 2026 04:36:12 +0000 (0:00:00.233) 0:00:00.233 ***** 2026-02-19 04:37:43.650207 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-19 04:37:43.650215 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-19 04:37:43.650222 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-19 04:37:43.650230 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-19 04:37:43.650238 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-19 04:37:43.650245 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-19 04:37:43.650252 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-19 04:37:43.650260 | orchestrator | 2026-02-19 04:37:43.650267 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-19 04:37:43.650274 | orchestrator | 2026-02-19 04:37:43.650281 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 
2026-02-19 04:37:43.650289 | orchestrator | Thursday 19 February 2026 04:36:13 +0000 (0:00:00.941) 0:00:01.174 ***** 2026-02-19 04:37:43.650298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 04:37:43.650307 | orchestrator | 2026-02-19 04:37:43.650356 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-19 04:37:43.650364 | orchestrator | Thursday 19 February 2026 04:36:15 +0000 (0:00:01.389) 0:00:02.564 ***** 2026-02-19 04:37:43.650372 | orchestrator | ok: [testbed-manager] 2026-02-19 04:37:43.650381 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:37:43.650389 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:37:43.650396 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:37:43.650404 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:37:43.650411 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:37:43.650418 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:37:43.650425 | orchestrator | 2026-02-19 04:37:43.650433 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-19 04:37:43.650440 | orchestrator | Thursday 19 February 2026 04:36:16 +0000 (0:00:01.933) 0:00:04.497 ***** 2026-02-19 04:37:43.650448 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:37:43.650455 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:37:43.650462 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:37:43.650469 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:37:43.650476 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:37:43.650484 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:37:43.650491 | orchestrator | ok: [testbed-manager] 2026-02-19 04:37:43.650498 | orchestrator | 2026-02-19 04:37:43.650506 | orchestrator | TASK [osism.services.netdata 
: Add repository gpg key] ************************* 2026-02-19 04:37:43.650513 | orchestrator | Thursday 19 February 2026 04:36:19 +0000 (0:00:02.176) 0:00:06.674 ***** 2026-02-19 04:37:43.650521 | orchestrator | changed: [testbed-manager] 2026-02-19 04:37:43.650528 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:37:43.650556 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:37:43.650564 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:37:43.650571 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:37:43.650580 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:37:43.650588 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:37:43.650596 | orchestrator | 2026-02-19 04:37:43.650616 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-02-19 04:37:43.650625 | orchestrator | Thursday 19 February 2026 04:36:20 +0000 (0:00:01.553) 0:00:08.227 ***** 2026-02-19 04:37:43.650634 | orchestrator | changed: [testbed-manager] 2026-02-19 04:37:43.650642 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:37:43.650650 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:37:43.650659 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:37:43.650667 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:37:43.650676 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:37:43.650684 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:37:43.650691 | orchestrator | 2026-02-19 04:37:43.650699 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-02-19 04:37:43.650706 | orchestrator | Thursday 19 February 2026 04:36:36 +0000 (0:00:15.349) 0:00:23.577 ***** 2026-02-19 04:37:43.650713 | orchestrator | changed: [testbed-manager] 2026-02-19 04:37:43.650721 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:37:43.650728 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:37:43.650735 | orchestrator | changed: 
[testbed-node-4] 2026-02-19 04:37:43.650742 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:37:43.650750 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:37:43.650757 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:37:43.650764 | orchestrator | 2026-02-19 04:37:43.650771 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-02-19 04:37:43.650779 | orchestrator | Thursday 19 February 2026 04:37:18 +0000 (0:00:42.427) 0:01:06.004 ***** 2026-02-19 04:37:43.650787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 04:37:43.650796 | orchestrator | 2026-02-19 04:37:43.650803 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-02-19 04:37:43.650811 | orchestrator | Thursday 19 February 2026 04:37:20 +0000 (0:00:01.510) 0:01:07.515 ***** 2026-02-19 04:37:43.650818 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-02-19 04:37:43.650826 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-02-19 04:37:43.650833 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-02-19 04:37:43.650841 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-02-19 04:37:43.650862 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-02-19 04:37:43.650870 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-02-19 04:37:43.650877 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-02-19 04:37:43.650884 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-02-19 04:37:43.650892 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-02-19 04:37:43.650899 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 
2026-02-19 04:37:43.650906 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-02-19 04:37:43.650913 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-02-19 04:37:43.650920 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-02-19 04:37:43.650927 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-02-19 04:37:43.650934 | orchestrator | 2026-02-19 04:37:43.650941 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-02-19 04:37:43.650950 | orchestrator | Thursday 19 February 2026 04:37:23 +0000 (0:00:03.552) 0:01:11.067 ***** 2026-02-19 04:37:43.650957 | orchestrator | ok: [testbed-manager] 2026-02-19 04:37:43.650971 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:37:43.650978 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:37:43.650985 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:37:43.650993 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:37:43.651000 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:37:43.651007 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:37:43.651014 | orchestrator | 2026-02-19 04:37:43.651021 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-02-19 04:37:43.651028 | orchestrator | Thursday 19 February 2026 04:37:24 +0000 (0:00:01.347) 0:01:12.415 ***** 2026-02-19 04:37:43.651036 | orchestrator | changed: [testbed-manager] 2026-02-19 04:37:43.651043 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:37:43.651050 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:37:43.651058 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:37:43.651065 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:37:43.651072 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:37:43.651079 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:37:43.651086 | orchestrator | 2026-02-19 04:37:43.651094 | orchestrator | TASK 
[osism.services.netdata : Add netdata user to docker group] *************** 2026-02-19 04:37:43.651101 | orchestrator | Thursday 19 February 2026 04:37:26 +0000 (0:00:01.317) 0:01:13.732 ***** 2026-02-19 04:37:43.651108 | orchestrator | ok: [testbed-manager] 2026-02-19 04:37:43.651116 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:37:43.651123 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:37:43.651130 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:37:43.651137 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:37:43.651144 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:37:43.651151 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:37:43.651158 | orchestrator | 2026-02-19 04:37:43.651166 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-02-19 04:37:43.651173 | orchestrator | Thursday 19 February 2026 04:37:27 +0000 (0:00:01.229) 0:01:14.962 ***** 2026-02-19 04:37:43.651180 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:37:43.651188 | orchestrator | ok: [testbed-manager] 2026-02-19 04:37:43.651195 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:37:43.651202 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:37:43.651209 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:37:43.651216 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:37:43.651223 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:37:43.651230 | orchestrator | 2026-02-19 04:37:43.651237 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-02-19 04:37:43.651245 | orchestrator | Thursday 19 February 2026 04:37:29 +0000 (0:00:01.589) 0:01:16.552 ***** 2026-02-19 04:37:43.651256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-02-19 04:37:43.651266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml 
for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 04:37:43.651274 | orchestrator | 2026-02-19 04:37:43.651281 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-02-19 04:37:43.651288 | orchestrator | Thursday 19 February 2026 04:37:30 +0000 (0:00:01.408) 0:01:17.960 ***** 2026-02-19 04:37:43.651295 | orchestrator | changed: [testbed-manager] 2026-02-19 04:37:43.651302 | orchestrator | 2026-02-19 04:37:43.651340 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-02-19 04:37:43.651348 | orchestrator | Thursday 19 February 2026 04:37:32 +0000 (0:00:02.083) 0:01:20.044 ***** 2026-02-19 04:37:43.651355 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:37:43.651363 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:37:43.651370 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:37:43.651377 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:37:43.651384 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:37:43.651391 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:37:43.651399 | orchestrator | changed: [testbed-manager] 2026-02-19 04:37:43.651412 | orchestrator | 2026-02-19 04:37:43.651420 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:37:43.651427 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:37:43.651436 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:37:43.651443 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:37:43.651451 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:37:43.651463 | orchestrator | testbed-node-3 : ok=15  
changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:37:44.088372 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:37:44.088465 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:37:44.088478 | orchestrator | 2026-02-19 04:37:44.088488 | orchestrator | 2026-02-19 04:37:44.088498 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:37:44.088508 | orchestrator | Thursday 19 February 2026 04:37:43 +0000 (0:00:11.096) 0:01:31.141 ***** 2026-02-19 04:37:44.088517 | orchestrator | =============================================================================== 2026-02-19 04:37:44.088526 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 42.43s 2026-02-19 04:37:44.088535 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.35s 2026-02-19 04:37:44.088544 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.10s 2026-02-19 04:37:44.088552 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.55s 2026-02-19 04:37:44.088561 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.18s 2026-02-19 04:37:44.088570 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.08s 2026-02-19 04:37:44.088579 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.93s 2026-02-19 04:37:44.088587 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.59s 2026-02-19 04:37:44.088609 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.55s 2026-02-19 04:37:44.088618 | orchestrator | osism.services.netdata : Include config tasks 
--------------------------- 1.51s 2026-02-19 04:37:44.088627 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.41s 2026-02-19 04:37:44.088635 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.39s 2026-02-19 04:37:44.088654 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.35s 2026-02-19 04:37:44.088664 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.32s 2026-02-19 04:37:44.088673 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.23s 2026-02-19 04:37:44.088681 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2026-02-19 04:37:46.624171 | orchestrator | 2026-02-19 04:37:46 | INFO  | Task 930ae9b7-13aa-4b85-9ccf-a0134bf03487 (prometheus) was prepared for execution. 2026-02-19 04:37:46.624242 | orchestrator | 2026-02-19 04:37:46 | INFO  | It takes a moment until task 930ae9b7-13aa-4b85-9ccf-a0134bf03487 (prometheus) has been started and output is visible here. 
2026-02-19 04:37:55.790522 | orchestrator | 2026-02-19 04:37:55.790626 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:37:55.790655 | orchestrator | 2026-02-19 04:37:55.790660 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 04:37:55.790680 | orchestrator | Thursday 19 February 2026 04:37:50 +0000 (0:00:00.270) 0:00:00.270 ***** 2026-02-19 04:37:55.790684 | orchestrator | ok: [testbed-manager] 2026-02-19 04:37:55.790690 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:37:55.790694 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:37:55.790697 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:37:55.790701 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:37:55.790705 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:37:55.790709 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:37:55.790713 | orchestrator | 2026-02-19 04:37:55.790717 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:37:55.790720 | orchestrator | Thursday 19 February 2026 04:37:51 +0000 (0:00:00.850) 0:00:01.120 ***** 2026-02-19 04:37:55.790727 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-02-19 04:37:55.790733 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-02-19 04:37:55.790739 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-02-19 04:37:55.790746 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-02-19 04:37:55.790752 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-02-19 04:37:55.790757 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-02-19 04:37:55.790763 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-02-19 04:37:55.790769 | orchestrator | 2026-02-19 04:37:55.790775 | orchestrator | PLAY [Apply role 
prometheus] *************************************************** 2026-02-19 04:37:55.790781 | orchestrator | 2026-02-19 04:37:55.790787 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-19 04:37:55.790793 | orchestrator | Thursday 19 February 2026 04:37:52 +0000 (0:00:00.885) 0:00:02.006 ***** 2026-02-19 04:37:55.790802 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 04:37:55.790808 | orchestrator | 2026-02-19 04:37:55.790811 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-02-19 04:37:55.790815 | orchestrator | Thursday 19 February 2026 04:37:53 +0000 (0:00:01.349) 0:00:03.356 ***** 2026-02-19 04:37:55.790822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:37:55.790830 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-19 04:37:55.790836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:37:55.790847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:37:55.790868 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:37:55.790873 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:37:55.790877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:37:55.790881 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:37:55.790885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:37:55.790891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:37:55.790896 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:37:55.790910 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:37:56.771062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:37:56.771173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:37:56.771182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:37:56.771189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:37:56.771197 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-19 04:37:56.771225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:37:56.771249 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-19 04:37:56.771261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:37:56.771268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:37:56.771274 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-19 04:37:56.771280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:37:56.771286 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:37:56.771297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:37:56.771303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-19 04:37:56.771345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:01.724530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:01.724661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 04:38:01.724674 | orchestrator |
2026-02-19 04:38:01.724686 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-19 04:38:01.724698 | orchestrator | Thursday 19 February 2026 04:37:56 +0000 (0:00:02.781)       0:00:06.137 *****
2026-02-19 04:38:01.724708 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 04:38:01.724721 | orchestrator |
2026-02-19 04:38:01.724730 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-02-19 04:38:01.724739 | orchestrator | Thursday 19 February 2026 04:37:58 +0000 (0:00:01.706)       0:00:07.844 *****
2026-02-19 04:38:01.724750 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-19 04:38:01.724791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode':
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:01.724801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:01.724827 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:01.724855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:01.724865 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:01.724875 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:01.724884 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:01.724902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:01.724911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:01.724920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:01.724936 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:01.724952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:03.791185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:03.791359 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:03.791400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:03.791410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:03.791418 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-19 04:38:03.791427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:03.791450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-19 04:38:03.791475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-19 04:38:03.791484 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-19 04:38:03.791499 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:03.791507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:03.791514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:03.791521 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:03.791528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:03.791544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:04.926526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-19 04:38:04.926669 | orchestrator |
2026-02-19 04:38:04.926682 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-02-19 04:38:04.926693 | orchestrator | Thursday 19 February 2026 04:38:03 +0000 (0:00:05.311)       0:00:13.156 *****
2026-02-19 04:38:04.926704 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-19 04:38:04.926714 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-19 04:38:04.926723 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes':
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 04:38:04.926781 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-19 04:38:04.926811 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:04.926819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:04.926835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:04.926843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:04.926851 | orchestrator | skipping: [testbed-manager] 2026-02-19 04:38:04.926861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 04:38:04.926869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:04.926876 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:38:04.926889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:04.926897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:04.926911 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:05.518389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 04:38:05.518556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:05.518575 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:38:05.518588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:05.518598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:05.518608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:05.518637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-02-19 04:38:05.518646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:05.518679 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:38:05.518708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:05.518718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 04:38:05.518727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 04:38:05.518738 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:38:05.518749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:05.518760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 04:38:05.518778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 04:38:05.518789 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:38:05.518800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:05.518829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 04:38:06.553442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 
04:38:06.553576 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:38:06.553590 | orchestrator | 2026-02-19 04:38:06.553599 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-19 04:38:06.553607 | orchestrator | Thursday 19 February 2026 04:38:05 +0000 (0:00:01.730) 0:00:14.886 ***** 2026-02-19 04:38:06.553616 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-19 04:38:06.553626 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:06.553634 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 04:38:06.553664 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-19 04:38:06.553718 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:06.553726 | 
orchestrator | skipping: [testbed-manager] 2026-02-19 04:38:06.553733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:06.553740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:06.553746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:06.553753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 04:38:06.553760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:06.553776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:06.553783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:06.553795 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:07.771729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 04:38:07.771812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:07.771823 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:38:07.771833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:07.771841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:07.771848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:07.771888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-02-19 04:38:07.771896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 04:38:07.771903 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:38:07.771910 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:38:07.771930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:07.771937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 04:38:07.771944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 04:38:07.771950 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:38:07.771957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:07.771968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 04:38:07.771978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 04:38:07.771985 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:38:07.771991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 04:38:07.772002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 04:38:11.194391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 04:38:11.194478 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:38:11.194490 | orchestrator | 2026-02-19 04:38:11.194499 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-19 04:38:11.194509 | orchestrator | Thursday 19 February 2026 04:38:07 +0000 (0:00:02.242) 0:00:17.129 ***** 2026-02-19 04:38:11.194518 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-19 04:38:11.194528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:11.194557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:11.194578 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:11.194586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:11.194607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:11.194616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:11.194625 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-19 04:38:11.194638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:11.194662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:11.194677 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:11.194696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:11.194708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:11.194730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:13.879857 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:13.879959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:13.879997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:13.880008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-19 04:38:13.880029 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-19 04:38:13.880038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-19 04:38:13.880046 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-19 04:38:13.880073 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-19 04:38:13.880085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:13.880099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:13.880107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-19 04:38:13.880119 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:13.880128 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:13.880137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:13.880152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 04:38:18.321574 | orchestrator | 2026-02-19 04:38:18.321672 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-19 04:38:18.321683 | orchestrator | Thursday 19 February 2026 04:38:13 +0000 (0:00:06.112) 0:00:23.242 ***** 2026-02-19 04:38:18.321709 | orchestrator | ok: 
[testbed-manager -> localhost] 2026-02-19 04:38:18.321717 | orchestrator | 2026-02-19 04:38:18.321724 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-19 04:38:18.321730 | orchestrator | Thursday 19 February 2026 04:38:14 +0000 (0:00:00.891) 0:00:24.133 ***** 2026-02-19 04:38:18.321738 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319874, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6263242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:18.321748 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319874, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6263242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:18.321754 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319874, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6263242, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:18.321773 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319890, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6324916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:18.321785 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319874, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6263242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 04:38:18.321796 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319890, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6324916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:18.321823 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319874, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6263242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:18.321844 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319890, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6324916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:18.321855 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319870, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.624815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:18.321867 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319874, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6263242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:18.321883 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319870, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.624815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:18.321895 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319890, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6324916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:18.321905 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1319874, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6263242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:18.321932 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319870, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.624815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.170770 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319885, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6307025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.170879 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319885, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1771468882.6307025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.170895 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319890, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6324916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.170925 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319885, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6307025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.170938 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319870, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.624815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.170949 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319890, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6324916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.170984 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1319868, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6236594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.171015 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1319890, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6324916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 04:38:20.171027 | orchestrator | skipping: [testbed-node-2] => 
(item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1319868, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6236594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.171039 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319870, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.624815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.171055 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319885, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6307025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.171067 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1319877, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6276445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.171078 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1319868, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6236594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.171097 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1319870, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.624815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:20.171116 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1319877, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1771468882.6276445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:21.918311 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1319883, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6303666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:21.918427 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1319877, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6276445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:21.918455 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1319868, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6236594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:21.918464 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319885, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6307025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:21.918490 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1319885, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6307025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:21.918498 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1319883, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6303666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:21.918505 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/ceph.rules', …}) 2026-02-19 04:38:21.918528 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', …})
2026-02-19 04:38:21.918536 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', …})
2026-02-19 04:38:21.918547 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', …})
2026-02-19 04:38:21.918555 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', …})
2026-02-19 04:38:21.918569 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', …})
2026-02-19 04:38:21.918577 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', …})
2026-02-19 04:38:21.918584 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', …})
2026-02-19 04:38:21.918597 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', …})
2026-02-19 04:38:23.400260 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', …})
2026-02-19 04:38:23.400421 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', …})
2026-02-19 04:38:23.400442 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', …})
2026-02-19 04:38:23.400477 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', …})
2026-02-19 04:38:23.400490 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', …})
2026-02-19 04:38:23.400502 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', …})
2026-02-19 04:38:23.400514 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', …})
2026-02-19 04:38:23.400544 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', …})
2026-02-19 04:38:23.400562 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', …})
2026-02-19 04:38:23.400574 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', …})
2026-02-19 04:38:23.400594 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', …})
2026-02-19 04:38:23.400606 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', …})
2026-02-19 04:38:23.400617 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', …})
2026-02-19 04:38:23.400628 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', …})
2026-02-19 04:38:23.400648 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', …})
2026-02-19 04:38:24.648979 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', …})
2026-02-19 04:38:24.649133 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', …})
2026-02-19 04:38:24.649158 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', …})
2026-02-19 04:38:24.649178 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', …})
2026-02-19 04:38:24.649195 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', …})
2026-02-19 04:38:24.649213 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', …})
2026-02-19 04:38:24.649231 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', …})
2026-02-19 04:38:24.649284 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', …})
2026-02-19 04:38:24.649320 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', …})
2026-02-19 04:38:24.649474 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', …})
2026-02-19 04:38:24.649491 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', …})
2026-02-19 04:38:24.649503 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', …})
2026-02-19 04:38:24.649515 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', …})
2026-02-19 04:38:24.649526 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', …})
2026-02-19 04:38:24.649557 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', …})
2026-02-19 04:38:26.353530 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', …})
2026-02-19 04:38:26.353664 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', …})
2026-02-19 04:38:26.353689 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', …})
2026-02-19 04:38:26.353708 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', …})
2026-02-19 04:38:26.353727 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', …})
2026-02-19 04:38:26.353745 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', …})
2026-02-19 04:38:26.353813 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', …})
2026-02-19 04:38:26.353856 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', …})
2026-02-19 04:38:26.353874 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', …})
2026-02-19 04:38:26.353890 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', …})
2026-02-19 04:38:26.353906 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', …})
2026-02-19 04:38:26.353922 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', …})
2026-02-19 04:38:26.353939 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', …})
2026-02-19 04:38:26.353988 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', …})
2026-02-19 04:38:26.354138 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', …})
2026-02-19 04:38:27.344222 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', …})
2026-02-19 04:38:27.344293 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', …})
2026-02-19 04:38:27.344300 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', …})
2026-02-19 04:38:27.344305 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', …})
2026-02-19 04:38:27.344310 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', …})
2026-02-19 04:38:27.344390 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', …})
2026-02-19 04:38:27.344398 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', …})
2026-02-19 04:38:27.344413 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', …})
2026-02-19 04:38:27.344419 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', …})
2026-02-19 04:38:27.344423 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', …})
2026-02-19 04:38:27.344429 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:38:27.344435 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', … 'isgid':
False})  2026-02-19 04:38:27.344440 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:38:27.344449 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319882, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6300392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:27.344457 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319866, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6232758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:27.344465 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319869, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6239116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188636 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319880, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6294646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188738 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319882, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6300392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188751 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319900, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6355891, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188761 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:38:35.188773 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319882, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6300392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188804 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319866, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6232758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188827 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319880, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6294646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188837 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319880, 'dev': 144, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6294646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188862 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1319879, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6279688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 04:38:35.188872 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319882, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6300392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188881 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319900, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6355891, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188890 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:38:35.188900 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319900, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6355891, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188916 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:38:35.188925 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319880, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6294646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188938 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319900, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6355891, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-19 04:38:35.188948 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:38:35.188963 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1319872, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6263242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 04:38:41.413958 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319889, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.632137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 04:38:41.414075 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319865, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6224375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-02-19 04:38:41.414085 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1319902, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6358151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 04:38:41.414108 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1319887, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.63166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 04:38:41.414114 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1319869, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6239116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 04:38:41.414130 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1319866, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6232758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 04:38:41.414136 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1319882, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6300392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 04:38:41.414153 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1319880, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6294646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 04:38:41.414159 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1319900, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.6355891, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-19 04:38:41.414165 | orchestrator | 2026-02-19 04:38:41.414171 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-02-19 04:38:41.414178 | orchestrator | Thursday 19 February 2026 04:38:38 +0000 (0:00:24.182) 0:00:48.315 ***** 2026-02-19 04:38:41.414187 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-19 04:38:41.414194 | orchestrator | 2026-02-19 04:38:41.414199 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-02-19 04:38:41.414204 | orchestrator | Thursday 19 February 2026 04:38:39 +0000 (0:00:00.710) 0:00:49.026 ***** 2026-02-19 04:38:41.414209 | orchestrator | [WARNING]: Skipped 2026-02-19 04:38:41.414214 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414220 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-02-19 04:38:41.414225 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414230 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-02-19 04:38:41.414235 | orchestrator | [WARNING]: Skipped 2026-02-19 04:38:41.414240 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414245 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-02-19 04:38:41.414249 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414254 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-02-19 
04:38:41.414259 | orchestrator | [WARNING]: Skipped 2026-02-19 04:38:41.414264 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414268 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-02-19 04:38:41.414273 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414278 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-02-19 04:38:41.414283 | orchestrator | [WARNING]: Skipped 2026-02-19 04:38:41.414287 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414292 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-02-19 04:38:41.414297 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414302 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-02-19 04:38:41.414306 | orchestrator | [WARNING]: Skipped 2026-02-19 04:38:41.414311 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414316 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-02-19 04:38:41.414321 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414325 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-02-19 04:38:41.414330 | orchestrator | [WARNING]: Skipped 2026-02-19 04:38:41.414377 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414386 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-02-19 04:38:41.414391 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414395 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-02-19 04:38:41.414400 | orchestrator | [WARNING]: 
Skipped 2026-02-19 04:38:41.414405 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414410 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-02-19 04:38:41.414414 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-19 04:38:41.414419 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-02-19 04:38:41.414424 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 04:38:41.414429 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-19 04:38:41.414434 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-19 04:38:41.414438 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-19 04:38:41.414443 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-19 04:38:41.414448 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-19 04:38:41.414453 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-19 04:38:41.414462 | orchestrator | 2026-02-19 04:38:41.414470 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-02-19 04:39:11.258491 | orchestrator | Thursday 19 February 2026 04:38:41 +0000 (0:00:01.753) 0:00:50.779 ***** 2026-02-19 04:39:11.258589 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-19 04:39:11.258603 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-19 04:39:11.258613 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:39:11.258623 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:39:11.258631 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-19 04:39:11.258639 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:39:11.258648 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-19 04:39:11.258655 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:39:11.258663 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-19 04:39:11.258671 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:39:11.258679 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-19 04:39:11.258687 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:39:11.258695 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-19 04:39:11.258703 | orchestrator | 2026-02-19 04:39:11.258711 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-02-19 04:39:11.258720 | orchestrator | Thursday 19 February 2026 04:38:57 +0000 (0:00:15.950) 0:01:06.730 ***** 2026-02-19 04:39:11.258728 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-19 04:39:11.258736 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-19 04:39:11.258743 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:39:11.258751 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:39:11.258759 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-19 04:39:11.258767 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:39:11.258775 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-19 04:39:11.258782 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:39:11.258790 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-19 04:39:11.258798 | orchestrator | 
skipping: [testbed-node-5] 2026-02-19 04:39:11.258806 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-19 04:39:11.258813 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:39:11.258821 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-19 04:39:11.258829 | orchestrator | 2026-02-19 04:39:11.258837 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-19 04:39:11.258845 | orchestrator | Thursday 19 February 2026 04:39:00 +0000 (0:00:02.696) 0:01:09.427 ***** 2026-02-19 04:39:11.258853 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-19 04:39:11.258863 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:39:11.258871 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-19 04:39:11.258879 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:39:11.258887 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-19 04:39:11.258916 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:39:11.258925 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-19 04:39:11.258933 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:39:11.258941 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-19 04:39:11.258962 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  
2026-02-19 04:39:11.258970 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:39:11.258978 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-19 04:39:11.258986 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:39:11.258994 | orchestrator |
2026-02-19 04:39:11.259002 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-02-19 04:39:11.259010 | orchestrator | Thursday 19 February 2026 04:39:01 +0000 (0:00:01.816) 0:01:11.243 *****
2026-02-19 04:39:11.259018 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-19 04:39:11.259026 | orchestrator |
2026-02-19 04:39:11.259033 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-02-19 04:39:11.259042 | orchestrator | Thursday 19 February 2026 04:39:02 +0000 (0:00:00.729) 0:01:11.973 *****
2026-02-19 04:39:11.259050 | orchestrator | skipping: [testbed-manager]
2026-02-19 04:39:11.259058 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:39:11.259066 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:39:11.259073 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:39:11.259097 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:39:11.259106 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:39:11.259113 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:39:11.259121 | orchestrator |
2026-02-19 04:39:11.259129 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-02-19 04:39:11.259137 | orchestrator | Thursday 19 February 2026 04:39:03 +0000 (0:00:00.749) 0:01:12.722 *****
2026-02-19 04:39:11.259145 | orchestrator | skipping: [testbed-manager]
2026-02-19 04:39:11.259152 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:39:11.259160 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:39:11.259168 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:39:11.259176 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:39:11.259183 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:39:11.259191 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:39:11.259199 | orchestrator |
2026-02-19 04:39:11.259206 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-19 04:39:11.259214 | orchestrator | Thursday 19 February 2026 04:39:05 +0000 (0:00:02.259) 0:01:14.982 *****
2026-02-19 04:39:11.259222 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-19 04:39:11.259230 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-19 04:39:11.259238 | orchestrator | skipping: [testbed-manager]
2026-02-19 04:39:11.259246 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-19 04:39:11.259253 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-19 04:39:11.259261 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-19 04:39:11.259269 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:39:11.259277 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:39:11.259284 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:39:11.259292 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:39:11.259300 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-19 04:39:11.259308 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:39:11.259322 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-19 04:39:11.259330 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:39:11.259338 | orchestrator |
2026-02-19 04:39:11.259370 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-02-19 04:39:11.259378 | orchestrator | Thursday 19 February 2026 04:39:07 +0000 (0:00:01.560) 0:01:16.542 *****
2026-02-19 04:39:11.259386 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-19 04:39:11.259394 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:39:11.259402 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-19 04:39:11.259410 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:39:11.259417 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-19 04:39:11.259425 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:39:11.259433 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-19 04:39:11.259441 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:39:11.259448 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-19 04:39:11.259456 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:39:11.259464 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-19 04:39:11.259472 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:39:11.259479 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-19 04:39:11.259487 | orchestrator |
2026-02-19 04:39:11.259495 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-02-19 04:39:11.259503 | orchestrator | Thursday 19 February 2026
04:39:08 +0000 (0:00:01.420) 0:01:17.962 ***** 2026-02-19 04:39:11.259511 | orchestrator | [WARNING]: Skipped 2026-02-19 04:39:11.259520 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-19 04:39:11.259527 | orchestrator | due to this access issue: 2026-02-19 04:39:11.259540 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-19 04:39:11.259548 | orchestrator | not a directory 2026-02-19 04:39:11.259555 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-19 04:39:11.259563 | orchestrator | 2026-02-19 04:39:11.259571 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-19 04:39:11.259579 | orchestrator | Thursday 19 February 2026 04:39:09 +0000 (0:00:01.116) 0:01:19.078 ***** 2026-02-19 04:39:11.259586 | orchestrator | skipping: [testbed-manager] 2026-02-19 04:39:11.259594 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:39:11.259602 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:39:11.259610 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:39:11.259617 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:39:11.259625 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:39:11.259633 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:39:11.259640 | orchestrator | 2026-02-19 04:39:11.259648 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-19 04:39:11.259656 | orchestrator | Thursday 19 February 2026 04:39:10 +0000 (0:00:01.023) 0:01:20.101 ***** 2026-02-19 04:39:11.259664 | orchestrator | skipping: [testbed-manager] 2026-02-19 04:39:11.259671 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:39:11.259679 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:39:11.259691 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:39:14.141201 | orchestrator | skipping: [testbed-node-3] 2026-02-19 
04:39:14.141311 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:39:14.141327 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:39:14.141421 | orchestrator |
2026-02-19 04:39:14.141453 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-02-19 04:39:14.141478 | orchestrator | Thursday 19 February 2026 04:39:11 +0000 (0:00:00.997) 0:01:21.099 *****
2026-02-19 04:39:14.141501 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-19 04:39:14.141525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-19 04:39:14.141545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-19 04:39:14.141564 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-19 04:39:14.141582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-19 04:39:14.141622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-19 04:39:14.141664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-19 04:39:14.141706 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-19 04:39:14.141729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 04:39:14.141750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 04:39:14.141771 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-19 04:39:14.141792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-19 04:39:14.141821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-19 04:39:14.141843 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-19 04:39:14.141889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 04:39:18.046932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 04:39:18.047032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 04:39:18.047049 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-19 04:39:18.047063 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-19 04:39:18.047095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-19 04:39:18.047107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-19 04:39:18.047142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 04:39:18.047172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-19 04:39:18.047186 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 04:39:18.047197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-19 04:39:18.047209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-19 04:39:18.047221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 04:39:18.047238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 04:39:18.047257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 04:39:18.047270 | orchestrator |
2026-02-19 04:39:18.047283 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-02-19 04:39:18.047295 | orchestrator | Thursday 19 February 2026 04:39:16 +0000 (0:00:04.326) 0:01:25.426 *****
2026-02-19 04:39:18.047307 | orchestrator | skipping:
[testbed-manager] => (item=testbed-node-0)
2026-02-19 04:39:18.047318 | orchestrator | skipping: [testbed-manager]
2026-02-19 04:39:18.047330 | orchestrator |
2026-02-19 04:39:18.047386 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-19 04:41:08.898676 | orchestrator | Thursday 19 February 2026 04:39:17 +0000 (0:00:01.267) 0:01:26.693 *****
2026-02-19 04:41:08.898770 | orchestrator |
2026-02-19 04:41:08.898782 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-19 04:41:08.898790 | orchestrator | Thursday 19 February 2026 04:39:17 +0000 (0:00:00.254) 0:01:26.948 *****
2026-02-19 04:41:08.898797 | orchestrator |
2026-02-19 04:41:08.898805 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-19 04:41:08.898812 | orchestrator | Thursday 19 February 2026 04:39:17 +0000 (0:00:00.073) 0:01:27.021 *****
2026-02-19 04:41:08.898819 | orchestrator |
2026-02-19 04:41:08.898827 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-19 04:41:08.898834 | orchestrator | Thursday 19 February 2026 04:39:17 +0000 (0:00:00.075) 0:01:27.097 *****
2026-02-19 04:41:08.898841 | orchestrator |
2026-02-19 04:41:08.898848 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-19 04:41:08.898855 | orchestrator | Thursday 19 February 2026 04:39:17 +0000 (0:00:00.066) 0:01:27.164 *****
2026-02-19 04:41:08.898862 | orchestrator |
2026-02-19 04:41:08.898870 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-19 04:41:08.898877 | orchestrator | Thursday 19 February 2026 04:39:17 +0000 (0:00:00.068) 0:01:27.232 *****
2026-02-19 04:41:08.898884 | orchestrator |
2026-02-19 04:41:08.898891 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-19 04:41:08.898898 | orchestrator | Thursday 19 February 2026 04:39:17 +0000 (0:00:00.065) 0:01:27.297 *****
2026-02-19 04:41:08.898905 | orchestrator |
2026-02-19 04:41:08.898912 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-02-19 04:41:08.898919 | orchestrator | Thursday 19 February 2026 04:39:18 +0000 (0:00:00.108) 0:01:27.406 *****
2026-02-19 04:41:08.898927 | orchestrator | changed: [testbed-manager]
2026-02-19 04:41:08.898935 | orchestrator |
2026-02-19 04:41:08.898943 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-02-19 04:41:08.898950 | orchestrator | Thursday 19 February 2026 04:39:40 +0000 (0:00:22.027) 0:01:49.433 *****
2026-02-19 04:41:08.898957 | orchestrator | changed: [testbed-manager]
2026-02-19 04:41:08.898964 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:41:08.898971 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:41:08.898979 | orchestrator | changed: [testbed-node-5]
2026-02-19 04:41:08.898986 | orchestrator | changed: [testbed-node-3]
2026-02-19 04:41:08.898993 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:41:08.899000 | orchestrator | changed: [testbed-node-4]
2026-02-19 04:41:08.899007 | orchestrator |
2026-02-19 04:41:08.899036 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-02-19 04:41:08.899043 | orchestrator | Thursday 19 February 2026 04:39:53 +0000 (0:00:13.272) 0:02:02.706 *****
2026-02-19 04:41:08.899051 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:41:08.899058 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:41:08.899065 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:41:08.899072 | orchestrator |
2026-02-19 04:41:08.899079 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-02-19 04:41:08.899087 | orchestrator | Thursday 19 February 2026 04:40:03 +0000 (0:00:10.177) 0:02:12.883 *****
2026-02-19 04:41:08.899094 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:41:08.899101 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:41:08.899108 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:41:08.899115 | orchestrator |
2026-02-19 04:41:08.899122 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-02-19 04:41:08.899130 | orchestrator | Thursday 19 February 2026 04:40:14 +0000 (0:00:10.557) 0:02:23.440 *****
2026-02-19 04:41:08.899137 | orchestrator | changed: [testbed-manager]
2026-02-19 04:41:08.899144 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:41:08.899151 | orchestrator | changed: [testbed-node-3]
2026-02-19 04:41:08.899158 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:41:08.899165 | orchestrator | changed: [testbed-node-5]
2026-02-19 04:41:08.899172 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:41:08.899179 | orchestrator | changed: [testbed-node-4]
2026-02-19 04:41:08.899186 | orchestrator |
2026-02-19 04:41:08.899193 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-02-19 04:41:08.899200 | orchestrator | Thursday 19 February 2026 04:40:28 +0000 (0:00:14.233) 0:02:37.674 *****
2026-02-19 04:41:08.899219 | orchestrator | changed: [testbed-manager]
2026-02-19 04:41:08.899226 | orchestrator |
2026-02-19 04:41:08.899235 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-02-19 04:41:08.899244 | orchestrator | Thursday 19 February 2026 04:40:36 +0000 (0:00:08.496) 0:02:46.170 *****
2026-02-19 04:41:08.899252 | orchestrator | changed: [testbed-node-2]
2026-02-19 04:41:08.899261 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:41:08.899269 | orchestrator | changed: [testbed-node-1]
2026-02-19 04:41:08.899277 | orchestrator |
2026-02-19 04:41:08.899285 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-02-19 04:41:08.899294 | orchestrator | Thursday 19 February 2026 04:40:47 +0000 (0:00:10.545) 0:02:56.716 *****
2026-02-19 04:41:08.899303 | orchestrator | changed: [testbed-manager]
2026-02-19 04:41:08.899311 | orchestrator |
2026-02-19 04:41:08.899319 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-02-19 04:41:08.899328 | orchestrator | Thursday 19 February 2026 04:40:57 +0000 (0:00:10.497) 0:03:07.213 *****
2026-02-19 04:41:08.899336 | orchestrator | changed: [testbed-node-3]
2026-02-19 04:41:08.899344 | orchestrator | changed: [testbed-node-5]
2026-02-19 04:41:08.899352 | orchestrator | changed: [testbed-node-4]
2026-02-19 04:41:08.899360 | orchestrator |
2026-02-19 04:41:08.899369 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 04:41:08.899379 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-19 04:41:08.899405 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-19 04:41:08.899426 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-19 04:41:08.899435 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-19 04:41:08.899442 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-19 04:41:08.899456 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-19 04:41:08.899463 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-19 04:41:08.899470 | orchestrator |
2026-02-19 04:41:08.899478 | orchestrator |
2026-02-19 04:41:08.899485 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 04:41:08.899492 | orchestrator | Thursday 19 February 2026 04:41:08 +0000 (0:00:10.503) 0:03:17.717 *****
2026-02-19 04:41:08.899500 | orchestrator | ===============================================================================
2026-02-19 04:41:08.899507 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.18s
2026-02-19 04:41:08.899514 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.03s
2026-02-19 04:41:08.899521 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.95s
2026-02-19 04:41:08.899528 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.23s
2026-02-19 04:41:08.899535 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.27s
2026-02-19 04:41:08.899542 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.56s
2026-02-19 04:41:08.899550 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.55s
2026-02-19 04:41:08.899557 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.50s
2026-02-19 04:41:08.899564 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.50s
2026-02-19 04:41:08.899571 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.18s
2026-02-19 04:41:08.899578 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.50s
2026-02-19 04:41:08.899585 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.11s
2026-02-19 04:41:08.899592 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.31s
2026-02-19 04:41:08.899599 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.33s
2026-02-19 04:41:08.899607 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.78s
2026-02-19 04:41:08.899614 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.70s
2026-02-19 04:41:08.899621 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.26s
2026-02-19 04:41:08.899628 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.24s
2026-02-19 04:41:08.899635 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.82s
2026-02-19 04:41:08.899642 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.75s
2026-02-19 04:41:12.198563 | orchestrator | 2026-02-19 04:41:12 | INFO  | Task 49b81e4f-6165-43fb-ad26-cd52d0a9604c (grafana) was prepared for execution.
2026-02-19 04:41:12.198687 | orchestrator | 2026-02-19 04:41:12 | INFO  | It takes a moment until task 49b81e4f-6165-43fb-ad26-cd52d0a9604c (grafana) has been started and output is visible here.
2026-02-19 04:41:22.197596 | orchestrator |
2026-02-19 04:41:22.197688 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-19 04:41:22.197704 | orchestrator |
2026-02-19 04:41:22.197718 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-19 04:41:22.197734 | orchestrator | Thursday 19 February 2026 04:41:16 +0000 (0:00:00.263) 0:00:00.263 *****
2026-02-19 04:41:22.197750 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:41:22.197762 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:41:22.197774 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:41:22.197785 | orchestrator |
2026-02-19 04:41:22.197802 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-19 04:41:22.197840 | orchestrator | Thursday 19 February 2026 04:41:16 +0000 (0:00:00.323) 0:00:00.587 *****
2026-02-19 04:41:22.197853 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-02-19 04:41:22.197866 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-02-19 04:41:22.197878 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-02-19 04:41:22.197889 | orchestrator |
2026-02-19 04:41:22.197902 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-02-19 04:41:22.197915 | orchestrator |
2026-02-19 04:41:22.197927 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-19 04:41:22.197934 | orchestrator | Thursday 19 February 2026 04:41:17 +0000 (0:00:00.475) 0:00:01.063 *****
2026-02-19 04:41:22.197941 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 04:41:22.197948 | orchestrator |
2026-02-19 04:41:22.197955 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-02-19 04:41:22.197962 | orchestrator | Thursday 19 February 2026 04:41:17 +0000 (0:00:00.575) 0:00:01.638 ***** 2026-02-19 04:41:22.197972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:41:22.197994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:41:22.198001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:41:22.198008 | orchestrator | 2026-02-19 04:41:22.198059 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-19 04:41:22.198067 | orchestrator | Thursday 19 February 2026 04:41:18 +0000 (0:00:00.901) 0:00:02.540 ***** 2026-02-19 04:41:22.198073 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-19 04:41:22.198081 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-19 04:41:22.198088 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 04:41:22.198094 | orchestrator | 2026-02-19 04:41:22.198101 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-19 04:41:22.198107 | orchestrator | Thursday 19 February 2026 04:41:19 +0000 (0:00:00.852) 0:00:03.392 ***** 2026-02-19 04:41:22.198122 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:41:22.198129 | orchestrator | 2026-02-19 04:41:22.198135 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-19 04:41:22.198153 | orchestrator | Thursday 19 February 2026 04:41:20 +0000 (0:00:00.585) 0:00:03.978 ***** 2026-02-19 04:41:22.198178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:41:22.198186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:41:22.198194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:41:22.198202 | orchestrator | 2026-02-19 04:41:22.198210 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-19 04:41:22.198217 | orchestrator | Thursday 19 February 2026 04:41:21 
+0000 (0:00:01.364) 0:00:05.342 ***** 2026-02-19 04:41:22.198225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-19 04:41:22.198234 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:41:22.198242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-19 04:41:22.198256 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:41:22.198276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-19 04:41:29.071116 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:41:29.071233 | orchestrator | 2026-02-19 04:41:29.071250 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-19 04:41:29.071262 | orchestrator | Thursday 19 February 2026 04:41:22 +0000 (0:00:00.549) 0:00:05.892 ***** 2026-02-19 04:41:29.071275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-19 04:41:29.071288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-19 04:41:29.071299 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:41:29.071309 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:41:29.071320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-19 04:41:29.071329 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:41:29.071339 | orchestrator | 2026-02-19 04:41:29.071349 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-19 04:41:29.071359 | orchestrator | Thursday 19 February 2026 04:41:22 +0000 (0:00:00.609) 0:00:06.501 ***** 2026-02-19 04:41:29.071369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:41:29.071455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:41:29.071502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:41:29.071522 | orchestrator | 2026-02-19 04:41:29.071541 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-19 04:41:29.071552 | orchestrator | Thursday 19 February 2026 04:41:24 +0000 (0:00:01.286) 0:00:07.788 ***** 2026-02-19 04:41:29.071562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:41:29.071573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:41:29.071583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
2026-02-19 04:41:29.071602 | orchestrator |
2026-02-19 04:41:29.071611 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-02-19 04:41:29.071621 | orchestrator | Thursday 19 February 2026 04:41:25 +0000 (0:00:01.681) 0:00:09.469 *****
2026-02-19 04:41:29.071631 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:41:29.071640 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:41:29.071650 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:41:29.071659 | orchestrator |
2026-02-19 04:41:29.071669 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-02-19 04:41:29.071678 | orchestrator | Thursday 19 February 2026 04:41:26 +0000 (0:00:00.314) 0:00:09.783 *****
2026-02-19 04:41:29.071688 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-19 04:41:29.071698 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-19 04:41:29.071708 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-19 04:41:29.071717 | orchestrator |
2026-02-19 04:41:29.071727 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-02-19 04:41:29.071736 | orchestrator | Thursday 19 February 2026 04:41:27 +0000 (0:00:01.300) 0:00:11.084 *****
2026-02-19 04:41:29.071746 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-19 04:41:29.071762 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-19 04:41:29.071772 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-19 04:41:29.071782 | orchestrator |
2026-02-19 04:41:29.071791 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-02-19 04:41:29.071809 | orchestrator | Thursday 19 February 2026 04:41:29 +0000 (0:00:01.671) 0:00:12.756 *****
2026-02-19 04:41:35.539517 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-19 04:41:35.539642 | orchestrator |
2026-02-19 04:41:35.539666 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-02-19 04:41:35.539685 | orchestrator | Thursday 19 February 2026 04:41:29 +0000 (0:00:00.728) 0:00:13.484 *****
2026-02-19 04:41:35.539701 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-02-19 04:41:35.539718 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-02-19 04:41:35.539734 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:41:35.539752 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:41:35.539768 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:41:35.539783 | orchestrator |
2026-02-19 04:41:35.539799 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-02-19 04:41:35.539815 | orchestrator | Thursday 19 February 2026 04:41:30 +0000 (0:00:00.340) 0:00:14.215 *****
2026-02-19 04:41:35.539831 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:41:35.539848 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:41:35.539865 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:41:35.539880 | orchestrator |
2026-02-19 04:41:35.539897 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-02-19 04:41:35.539913 | orchestrator | Thursday 19 February 2026 04:41:30 +0000 (0:00:00.340) 0:00:14.556 *****
2026-02-19 04:41:35.539934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path':
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1318535, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.255678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:35.539987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1318535, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.255678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:35.540007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1318535, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.255678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:35.540026 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1318837, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3215904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:35.540082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1318837, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3215904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:35.540105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1318837, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3215904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:35.540122 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1318550, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2584076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:35.540150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1318550, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2584076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:35.540169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1318550, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2584076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:35.540187 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1318842, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3252068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:35.540210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1318842, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3252068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:35.540240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1318842, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3252068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-02-19 04:41:39.419685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1318572, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2624037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:39.419822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1318572, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2624037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:39.419840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1318572, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2624037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:39.419852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1318630, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.279383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:39.419864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1318630, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.279383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:39.419889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1318630, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.279383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:39.419921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1318533, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.254314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:39.419941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1318533, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.254314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:39.419953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1318533, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.254314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:39.419964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1318544, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.255809, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:39.419975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1318544, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.255809, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:39.419991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1318544, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.255809, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:39.420011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1318554, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2588232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.636804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1318554, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2588232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.636913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1318554, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2588232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.636938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1318579, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2638085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.636960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1318579, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2638085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.637009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1318579, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2638085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.637032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1318636, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3209634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.637076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1318636, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3209634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.637121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1318636, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3209634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.637141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1318546, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2573638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.637176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1318546, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2573638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.637195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1318546, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2573638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.637221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1318591, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.276972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:43.637252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1318591, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.276972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.415940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1318591, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.276972, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.416080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1318574, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2635095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.416110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1318574, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2635095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.416130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1318574, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1771468882.2635095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.416172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1318568, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2622616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.416197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1318568, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2622616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.416269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1318568, 
'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2622616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.416291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1318559, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2611048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.416308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1318559, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2611048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.416325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 27218, 'inode': 1318559, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2611048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.416342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1318582, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2648962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.416367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1318582, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2648962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:47.416432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1318557, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2596405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1318582, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2648962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1318557, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2596405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1318633, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.279383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1318557, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.2596405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1318633, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.279383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319388, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.4664857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319388, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.4664857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1318633, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.279383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1318895, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3366613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1318895, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3366613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1319388, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.4664857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1318878, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3287191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:51.276234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1318878, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3287191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:55.418699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1318895, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3366613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:55.418827 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1318921, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.339775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:55.418857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1318921, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.339775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:55.418899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1318878, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3287191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-02-19 04:41:55.418950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1318864, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3263204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:55.418974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1318864, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3263204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:55.419017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1318921, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.339775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:55.419035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1318953, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3484714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:55.419046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1318953, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3484714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:55.419058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1318864, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3263204, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:55.419084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1318925, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3460972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:55.419096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1318925, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3460972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:55.419117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1318953, 
'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3484714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:59.171588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1318956, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.349338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:59.171671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1318956, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.349338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:59.171683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1318925, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3460972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:59.171725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1319383, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.4625375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:59.171735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1319383, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.4625375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:59.171742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1318956, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.349338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:59.171762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1318950, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3479517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:59.171770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1318950, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3479517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:59.171777 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1319383, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.4625375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:59.171792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1318912, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3384683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:59.171798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1318912, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3384683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 
04:41:59.171802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1318950, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3479517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:41:59.171813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1318894, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3318102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1318894, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3318102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1318912, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3384683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1318908, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3375793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1318908, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3375793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1318894, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3318102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1318889, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3309367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1318889, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3309367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1318908, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3375793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1318918, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.339405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1318918, 'dev': 144, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.339405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1318889, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3309367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1319327, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.4619546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:03.498655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 222049, 'inode': 1319327, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.4619546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.194876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1318918, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.339405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.195002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1319324, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.4378119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.195033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1319324, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.4378119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.195046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1319327, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.4619546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.195058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1318868, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3266408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.195070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1318868, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3266408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.195100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1318873, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3278468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.195121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1318873, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3278468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.195138 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1319324, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.4378119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.195149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1318948, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.346972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.195162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1318868, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3266408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.195174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1318948, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.346972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:42:07.195200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1318959, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3498507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:43:56.019606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1318959, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3498507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:43:56.019719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1318873, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.3278468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:43:56.019739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1318948, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771468882.346972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:43:56.019751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1318959, 'dev': 144, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1771468882.3498507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-19 04:43:56.019761 | orchestrator | 2026-02-19 04:43:56.019772 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-19 04:43:56.019782 | orchestrator | Thursday 19 February 2026 04:42:09 +0000 (0:00:38.628) 0:00:53.185 ***** 2026-02-19 04:43:56.019791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:43:56.019842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:43:56.019853 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-19 04:43:56.019863 | orchestrator | 2026-02-19 04:43:56.019873 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-19 04:43:56.019882 | orchestrator | Thursday 19 February 2026 04:42:10 +0000 (0:00:01.003) 0:00:54.188 ***** 2026-02-19 04:43:56.019891 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:43:56.019902 | orchestrator | 2026-02-19 04:43:56.019917 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-19 04:43:56.019927 | orchestrator | Thursday 19 February 2026 04:42:12 +0000 (0:00:02.478) 0:00:56.667 ***** 2026-02-19 04:43:56.019936 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:43:56.019945 | orchestrator | 2026-02-19 04:43:56.019954 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-19 04:43:56.019963 | orchestrator | Thursday 19 February 2026 04:42:15 +0000 (0:00:02.489) 0:00:59.156 ***** 2026-02-19 04:43:56.019972 | orchestrator | 2026-02-19 04:43:56.019981 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-19 04:43:56.019990 | orchestrator | Thursday 19 February 2026 04:42:15 +0000 (0:00:00.082) 0:00:59.239 ***** 2026-02-19 04:43:56.020000 
| orchestrator | 2026-02-19 04:43:56.020009 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-19 04:43:56.020019 | orchestrator | Thursday 19 February 2026 04:42:15 +0000 (0:00:00.072) 0:00:59.311 ***** 2026-02-19 04:43:56.020028 | orchestrator | 2026-02-19 04:43:56.020038 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-19 04:43:56.020047 | orchestrator | Thursday 19 February 2026 04:42:15 +0000 (0:00:00.078) 0:00:59.390 ***** 2026-02-19 04:43:56.020053 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:43:56.020059 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:43:56.020065 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:43:56.020071 | orchestrator | 2026-02-19 04:43:56.020077 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-19 04:43:56.020087 | orchestrator | Thursday 19 February 2026 04:42:22 +0000 (0:00:07.248) 0:01:06.639 ***** 2026-02-19 04:43:56.020097 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:43:56.020105 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:43:56.020115 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-19 04:43:56.020136 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-19 04:43:56.020145 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-02-19 04:43:56.020156 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-02-19 04:43:56.020167 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:43:56.020177 | orchestrator | 2026-02-19 04:43:56.020187 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-19 04:43:56.020197 | orchestrator | Thursday 19 February 2026 04:43:14 +0000 (0:00:51.392) 0:01:58.031 ***** 2026-02-19 04:43:56.020207 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:43:56.020219 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:43:56.020229 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:43:56.020238 | orchestrator | 2026-02-19 04:43:56.020249 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-19 04:43:56.020260 | orchestrator | Thursday 19 February 2026 04:43:50 +0000 (0:00:36.244) 0:02:34.275 ***** 2026-02-19 04:43:56.020270 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:43:56.020280 | orchestrator | 2026-02-19 04:43:56.020289 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-19 04:43:56.020299 | orchestrator | Thursday 19 February 2026 04:43:52 +0000 (0:00:02.251) 0:02:36.527 ***** 2026-02-19 04:43:56.020308 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:43:56.020318 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:43:56.020328 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:43:56.020338 | orchestrator | 2026-02-19 04:43:56.020348 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-19 04:43:56.020357 | orchestrator | Thursday 19 February 2026 04:43:53 +0000 (0:00:00.335) 0:02:36.862 ***** 2026-02-19 04:43:56.020368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-02-19 04:43:56.020390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-19 04:43:56.654912 | orchestrator | 2026-02-19 04:43:56.655012 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-19 04:43:56.655029 | orchestrator | Thursday 19 February 2026 04:43:55 +0000 (0:00:02.832) 0:02:39.695 ***** 2026-02-19 04:43:56.655041 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:43:56.655054 | orchestrator | 2026-02-19 04:43:56.655065 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:43:56.655077 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 04:43:56.655090 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 04:43:56.655101 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 04:43:56.655112 | orchestrator | 2026-02-19 04:43:56.655123 | orchestrator | 2026-02-19 04:43:56.655134 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:43:56.655165 | orchestrator | Thursday 19 February 2026 04:43:56 +0000 (0:00:00.300) 0:02:39.995 ***** 2026-02-19 04:43:56.655176 | orchestrator | =============================================================================== 2026-02-19 04:43:56.655209 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.39s 2026-02-19 04:43:56.655220 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 38.63s 2026-02-19 04:43:56.655230 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 36.24s 2026-02-19 04:43:56.655242 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.25s 2026-02-19 04:43:56.655252 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.83s 2026-02-19 04:43:56.655263 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.49s 2026-02-19 04:43:56.655274 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.48s 2026-02-19 04:43:56.655285 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.25s 2026-02-19 04:43:56.655295 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.68s 2026-02-19 04:43:56.655306 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.67s 2026-02-19 04:43:56.655317 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.36s 2026-02-19 04:43:56.655327 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.30s 2026-02-19 04:43:56.655338 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.29s 2026-02-19 04:43:56.655349 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.00s 2026-02-19 04:43:56.655359 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.90s 2026-02-19 04:43:56.655370 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.85s 2026-02-19 04:43:56.655381 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s 2026-02-19 04:43:56.655391 | orchestrator | grafana : Find custom grafana dashboards 
-------------------------------- 0.73s 2026-02-19 04:43:56.655402 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.61s 2026-02-19 04:43:56.655413 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.59s 2026-02-19 04:43:56.957300 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-02-19 04:43:56.964424 | orchestrator | + set -e 2026-02-19 04:43:56.964560 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-19 04:43:56.964587 | orchestrator | ++ export INTERACTIVE=false 2026-02-19 04:43:56.964602 | orchestrator | ++ INTERACTIVE=false 2026-02-19 04:43:56.964613 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-19 04:43:56.964623 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-19 04:43:56.964634 | orchestrator | + source /opt/manager-vars.sh 2026-02-19 04:43:56.964645 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-19 04:43:56.964655 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-19 04:43:56.964666 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-19 04:43:56.964676 | orchestrator | ++ CEPH_VERSION=reef 2026-02-19 04:43:56.964687 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-19 04:43:56.964699 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-19 04:43:56.964709 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-19 04:43:56.964721 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-19 04:43:56.964732 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-19 04:43:56.964743 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-19 04:43:56.964753 | orchestrator | ++ export ARA=false 2026-02-19 04:43:56.964764 | orchestrator | ++ ARA=false 2026-02-19 04:43:56.964775 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-19 04:43:56.964785 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-19 04:43:56.964796 | orchestrator | ++ export TEMPEST=false 2026-02-19 04:43:56.964807 | orchestrator | ++ 
TEMPEST=false 2026-02-19 04:43:56.964817 | orchestrator | ++ export IS_ZUUL=true 2026-02-19 04:43:56.964828 | orchestrator | ++ IS_ZUUL=true 2026-02-19 04:43:56.964838 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 04:43:56.964849 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 04:43:56.964860 | orchestrator | ++ export EXTERNAL_API=false 2026-02-19 04:43:56.964870 | orchestrator | ++ EXTERNAL_API=false 2026-02-19 04:43:56.964881 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-19 04:43:56.964891 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-19 04:43:56.964902 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-19 04:43:56.964943 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-19 04:43:56.964954 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-19 04:43:56.964965 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-19 04:43:56.965588 | orchestrator | ++ semver 9.5.0 8.0.0 2026-02-19 04:43:57.034677 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-19 04:43:57.034754 | orchestrator | + osism apply clusterapi 2026-02-19 04:43:59.433738 | orchestrator | 2026-02-19 04:43:59 | INFO  | Task d890457f-d072-4d5b-8445-ab2daea0e4d9 (clusterapi) was prepared for execution. 2026-02-19 04:43:59.433867 | orchestrator | 2026-02-19 04:43:59 | INFO  | It takes a moment until task d890457f-d072-4d5b-8445-ab2daea0e4d9 (clusterapi) has been started and output is visible here. 
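As an annotation on the trace above: the deploy script calls a `semver` helper (`semver 9.5.0 8.0.0`) and then tests `[[ 1 -ge 0 ]]`, i.e. the helper appears to print 1/0/-1 depending on how the two dotted versions compare. A minimal Python sketch of that comparison (an illustrative assumption, not the actual helper shipped with the testbed scripts) could look like:

```python
def semver_cmp(a: str, b: str) -> int:
    """Compare two dotted version strings numerically.

    Returns 1 if a > b, -1 if a < b, 0 if equal -- matching the
    1/0/-1 convention the deploy script seems to test with [[ ... -ge 0 ]].
    """
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    # Pad the shorter list with zeros so "9.5" compares like "9.5.0".
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    # List comparison is lexicographic over the numeric components.
    return (pa > pb) - (pa < pb)
```

With the values from the log, `semver_cmp("9.5.0", "8.0.0")` yields 1, consistent with the `[[ 1 -ge 0 ]]` branch being taken.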
2026-02-19 04:45:01.781426 | orchestrator | 2026-02-19 04:45:01.781549 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-02-19 04:45:01.781564 | orchestrator | 2026-02-19 04:45:01.781575 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-02-19 04:45:01.781587 | orchestrator | Thursday 19 February 2026 04:44:03 +0000 (0:00:00.189) 0:00:00.189 ***** 2026-02-19 04:45:01.781599 | orchestrator | included: cert_manager for testbed-manager 2026-02-19 04:45:01.781610 | orchestrator | 2026-02-19 04:45:01.781621 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-02-19 04:45:01.781632 | orchestrator | Thursday 19 February 2026 04:44:03 +0000 (0:00:00.238) 0:00:00.427 ***** 2026-02-19 04:45:01.781639 | orchestrator | changed: [testbed-manager] 2026-02-19 04:45:01.781647 | orchestrator | 2026-02-19 04:45:01.781658 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-02-19 04:45:01.781668 | orchestrator | Thursday 19 February 2026 04:44:09 +0000 (0:00:05.646) 0:00:06.074 ***** 2026-02-19 04:45:01.781678 | orchestrator | changed: [testbed-manager] 2026-02-19 04:45:01.781688 | orchestrator | 2026-02-19 04:45:01.781699 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-02-19 04:45:01.781710 | orchestrator | 2026-02-19 04:45:01.781720 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-02-19 04:45:01.781732 | orchestrator | Thursday 19 February 2026 04:44:41 +0000 (0:00:31.616) 0:00:37.690 ***** 2026-02-19 04:45:01.781739 | orchestrator | ok: [testbed-manager] 2026-02-19 04:45:01.781745 | orchestrator | 2026-02-19 04:45:01.781767 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-02-19 04:45:01.781774 | orchestrator | Thursday 
19 February 2026 04:44:42 +0000 (0:00:01.107) 0:00:38.798 ***** 2026-02-19 04:45:01.781780 | orchestrator | ok: [testbed-manager] 2026-02-19 04:45:01.781787 | orchestrator | 2026-02-19 04:45:01.781793 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-02-19 04:45:01.781800 | orchestrator | Thursday 19 February 2026 04:44:42 +0000 (0:00:00.162) 0:00:38.960 ***** 2026-02-19 04:45:01.781806 | orchestrator | ok: [testbed-manager] 2026-02-19 04:45:01.781812 | orchestrator | 2026-02-19 04:45:01.781818 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-02-19 04:45:01.781824 | orchestrator | Thursday 19 February 2026 04:44:58 +0000 (0:00:16.398) 0:00:55.358 ***** 2026-02-19 04:45:01.781830 | orchestrator | skipping: [testbed-manager] 2026-02-19 04:45:01.781837 | orchestrator | 2026-02-19 04:45:01.781843 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-02-19 04:45:01.781849 | orchestrator | Thursday 19 February 2026 04:44:59 +0000 (0:00:00.155) 0:00:55.514 ***** 2026-02-19 04:45:01.781855 | orchestrator | changed: [testbed-manager] 2026-02-19 04:45:01.781861 | orchestrator | 2026-02-19 04:45:01.781868 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:45:01.781875 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 04:45:01.781882 | orchestrator | 2026-02-19 04:45:01.781889 | orchestrator | 2026-02-19 04:45:01.781895 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:45:01.781901 | orchestrator | Thursday 19 February 2026 04:45:01 +0000 (0:00:02.350) 0:00:57.864 ***** 2026-02-19 04:45:01.781927 | orchestrator | =============================================================================== 2026-02-19 04:45:01.781934 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 31.62s 2026-02-19 04:45:01.781940 | orchestrator | Initialize the CAPI management cluster --------------------------------- 16.40s 2026-02-19 04:45:01.781946 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.65s 2026-02-19 04:45:01.781952 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.35s 2026-02-19 04:45:01.781958 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.11s 2026-02-19 04:45:01.781964 | orchestrator | Include cert_manager role ----------------------------------------------- 0.24s 2026-02-19 04:45:01.781970 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.16s 2026-02-19 04:45:01.781976 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.16s 2026-02-19 04:45:02.117428 | orchestrator | + osism apply magnum 2026-02-19 04:45:04.204102 | orchestrator | 2026-02-19 04:45:04 | INFO  | Task cf162763-a69f-4fc1-a36a-ed3cd19eab5f (magnum) was prepared for execution. 2026-02-19 04:45:04.204200 | orchestrator | 2026-02-19 04:45:04 | INFO  | It takes a moment until task cf162763-a69f-4fc1-a36a-ed3cd19eab5f (magnum) has been started and output is visible here. 
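The "Waiting for grafana to start on first node" handler earlier in this log shows Ansible's `retries`/`until` polling loop in action (`FAILED - RETRYING ... (12 retries left)` until the service answers). A small Python sketch of that poll-until-ready pattern (hypothetical names; Ansible's own implementation also applies a configurable `delay` between attempts) could look like:

```python
import time

def wait_until(check, retries: int = 12, delay: float = 0.0) -> bool:
    """Poll `check` up to `retries` times, sleeping `delay` seconds
    between attempts -- the shape of an Ansible retries/until task.

    Returns True as soon as `check()` succeeds, False once all
    retries are exhausted.
    """
    for _attempt in range(retries):
        if check():
            return True
        # An Ansible task logs here: FAILED - RETRYING (<n> retries left).
        time.sleep(delay)
    return False
```

In the log above the check succeeded with 9 retries still remaining, i.e. on the fourth attempt after roughly 51 seconds of waiting.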
2026-02-19 04:45:49.482691 | orchestrator | 2026-02-19 04:45:49.482795 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:45:49.482806 | orchestrator | 2026-02-19 04:45:49.482814 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 04:45:49.482822 | orchestrator | Thursday 19 February 2026 04:45:08 +0000 (0:00:00.261) 0:00:00.261 ***** 2026-02-19 04:45:49.482828 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:45:49.482836 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:45:49.482842 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:45:49.482848 | orchestrator | 2026-02-19 04:45:49.482855 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:45:49.482862 | orchestrator | Thursday 19 February 2026 04:45:08 +0000 (0:00:00.307) 0:00:00.569 ***** 2026-02-19 04:45:49.482869 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-19 04:45:49.482876 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-19 04:45:49.482882 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-19 04:45:49.482888 | orchestrator | 2026-02-19 04:45:49.482894 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-19 04:45:49.482901 | orchestrator | 2026-02-19 04:45:49.482907 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-19 04:45:49.482913 | orchestrator | Thursday 19 February 2026 04:45:09 +0000 (0:00:00.441) 0:00:01.010 ***** 2026-02-19 04:45:49.482919 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:45:49.482927 | orchestrator | 2026-02-19 04:45:49.482934 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-19 
04:45:49.482941 | orchestrator | Thursday 19 February 2026 04:45:09 +0000 (0:00:00.593) 0:00:01.604 ***** 2026-02-19 04:45:49.482948 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-19 04:45:49.482955 | orchestrator | 2026-02-19 04:45:49.482962 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-19 04:45:49.482969 | orchestrator | Thursday 19 February 2026 04:45:13 +0000 (0:00:03.887) 0:00:05.492 ***** 2026-02-19 04:45:49.482977 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-19 04:45:49.482984 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-19 04:45:49.482991 | orchestrator | 2026-02-19 04:45:49.482998 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-19 04:45:49.483005 | orchestrator | Thursday 19 February 2026 04:45:20 +0000 (0:00:07.101) 0:00:12.594 ***** 2026-02-19 04:45:49.483046 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-19 04:45:49.483053 | orchestrator | 2026-02-19 04:45:49.483059 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-19 04:45:49.483065 | orchestrator | Thursday 19 February 2026 04:45:24 +0000 (0:00:03.627) 0:00:16.221 ***** 2026-02-19 04:45:49.483071 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-19 04:45:49.483077 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-19 04:45:49.483084 | orchestrator | 2026-02-19 04:45:49.483089 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-19 04:45:49.483096 | orchestrator | Thursday 19 February 2026 04:45:28 +0000 (0:00:04.170) 0:00:20.392 ***** 2026-02-19 04:45:49.483102 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-19 04:45:49.483108 | orchestrator | 2026-02-19 04:45:49.483114 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-19 04:45:49.483120 | orchestrator | Thursday 19 February 2026 04:45:32 +0000 (0:00:03.500) 0:00:23.893 ***** 2026-02-19 04:45:49.483125 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-19 04:45:49.483132 | orchestrator | 2026-02-19 04:45:49.483138 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-19 04:45:49.483145 | orchestrator | Thursday 19 February 2026 04:45:36 +0000 (0:00:04.201) 0:00:28.095 ***** 2026-02-19 04:45:49.483152 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:45:49.483158 | orchestrator | 2026-02-19 04:45:49.483165 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-19 04:45:49.483172 | orchestrator | Thursday 19 February 2026 04:45:39 +0000 (0:00:03.613) 0:00:31.708 ***** 2026-02-19 04:45:49.483178 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:45:49.483184 | orchestrator | 2026-02-19 04:45:49.483190 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-19 04:45:49.483196 | orchestrator | Thursday 19 February 2026 04:45:44 +0000 (0:00:04.196) 0:00:35.904 ***** 2026-02-19 04:45:49.483202 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:45:49.483208 | orchestrator | 2026-02-19 04:45:49.483213 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-19 04:45:49.483217 | orchestrator | Thursday 19 February 2026 04:45:47 +0000 (0:00:03.769) 0:00:39.673 ***** 2026-02-19 04:45:49.483238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:45:49.483245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:45:49.483261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:45:49.483266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:45:49.483271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:45:49.483279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:45:56.787841 | orchestrator | 2026-02-19 04:45:56.787954 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-19 04:45:56.787970 | orchestrator | Thursday 19 February 2026 04:45:49 +0000 (0:00:01.577) 0:00:41.251 ***** 2026-02-19 04:45:56.787982 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:45:56.787995 | orchestrator | 2026-02-19 04:45:56.788006 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-19 04:45:56.788017 | orchestrator | Thursday 19 February 2026 04:45:49 +0000 (0:00:00.152) 0:00:41.403 ***** 2026-02-19 04:45:56.788054 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:45:56.788074 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:45:56.788093 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:45:56.788111 | orchestrator | 2026-02-19 04:45:56.788130 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-19 04:45:56.788149 | orchestrator | Thursday 19 February 2026 04:45:49 +0000 (0:00:00.302) 0:00:41.705 ***** 2026-02-19 04:45:56.788167 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-19 04:45:56.788188 | orchestrator | 2026-02-19 04:45:56.788207 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-19 04:45:56.788228 | orchestrator | Thursday 19 February 2026 04:45:50 +0000 (0:00:00.857) 0:00:42.563 ***** 2026-02-19 04:45:56.788265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:45:56.788287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:45:56.788307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:45:56.788351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:45:56.788388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:45:56.788414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:45:56.788427 | orchestrator | 2026-02-19 04:45:56.788440 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-19 04:45:56.788454 
| orchestrator | Thursday 19 February 2026 04:45:53 +0000 (0:00:02.385) 0:00:44.949 ***** 2026-02-19 04:45:56.788466 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:45:56.788511 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:45:56.788533 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:45:56.788553 | orchestrator | 2026-02-19 04:45:56.788566 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-19 04:45:56.788579 | orchestrator | Thursday 19 February 2026 04:45:53 +0000 (0:00:00.484) 0:00:45.433 ***** 2026-02-19 04:45:56.788597 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:45:56.788616 | orchestrator | 2026-02-19 04:45:56.788635 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-19 04:45:56.788653 | orchestrator | Thursday 19 February 2026 04:45:54 +0000 (0:00:00.566) 0:00:46.000 ***** 2026-02-19 04:45:56.788670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:45:56.788704 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:45:57.729753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:45:57.729871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:45:57.729888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:45:57.729900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:45:57.729912 | orchestrator | 2026-02-19 04:45:57.729928 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-19 04:45:57.729973 | orchestrator | Thursday 19 February 2026 04:45:56 +0000 (0:00:02.557) 0:00:48.558 ***** 2026-02-19 04:45:57.730074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-19 04:45:57.730092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:45:57.730104 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:45:57.730124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-19 04:45:57.730136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:45:57.730147 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:45:57.730158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-19 04:45:57.730187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:46:01.234097 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:46:01.234199 | orchestrator | 2026-02-19 
04:46:01.234216 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-19 04:46:01.234229 | orchestrator | Thursday 19 February 2026 04:45:57 +0000 (0:00:00.942) 0:00:49.500 ***** 2026-02-19 04:46:01.234243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-19 04:46:01.234276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:46:01.234289 | 
orchestrator | skipping: [testbed-node-0] 2026-02-19 04:46:01.234301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-19 04:46:01.234338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:46:01.234350 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:46:01.234381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-19 04:46:01.234394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:46:01.234405 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:46:01.234416 | orchestrator | 2026-02-19 04:46:01.234432 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-19 04:46:01.234444 | orchestrator | Thursday 19 February 2026 04:45:58 +0000 (0:00:00.890) 0:00:50.391 ***** 2026-02-19 04:46:01.234456 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:46:01.234475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:46:01.234528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:46:07.329336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:46:07.329448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:46:07.329460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:46:07.329542 | orchestrator | 2026-02-19 04:46:07.329553 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-19 04:46:07.329561 | orchestrator | Thursday 19 February 2026 04:46:01 +0000 (0:00:02.619) 0:00:53.010 ***** 2026-02-19 04:46:07.329568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:46:07.329589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:46:07.329596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:46:07.329607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:46:07.329620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:46:07.329627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:46:07.329633 | orchestrator | 2026-02-19 04:46:07.329640 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-19 04:46:07.329651 | orchestrator | Thursday 19 February 2026 04:46:06 +0000 (0:00:05.423) 0:00:58.434 ***** 2026-02-19 04:46:07.329670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-19 04:46:09.159242 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:46:09.159338 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:46:09.159380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-19 04:46:09.159416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:46:09.159426 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:46:09.159436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-19 04:46:09.159462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 04:46:09.159472 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:46:09.159554 | orchestrator | 2026-02-19 04:46:09.159566 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-19 04:46:09.159577 | orchestrator | Thursday 19 February 2026 04:46:07 +0000 (0:00:00.674) 0:00:59.108 ***** 2026-02-19 04:46:09.159593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:46:09.159610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:46:09.159620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-19 04:46:09.159629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:46:09.159646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-19 04:47:07.722460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-19 04:47:07.722742 | orchestrator | 2026-02-19 04:47:07.722769 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-19 04:47:07.722782 | orchestrator | Thursday 19 February 2026 04:46:09 +0000 (0:00:01.821) 0:01:00.930 ***** 2026-02-19 04:47:07.722794 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:47:07.722805 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:47:07.722816 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:47:07.722845 | orchestrator | 2026-02-19 04:47:07.722856 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-19 04:47:07.722867 | orchestrator | Thursday 19 February 2026 04:46:09 +0000 (0:00:00.537) 0:01:01.468 ***** 2026-02-19 04:47:07.722878 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:47:07.722889 | orchestrator | 2026-02-19 04:47:07.722900 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-19 04:47:07.722911 | orchestrator | Thursday 19 February 2026 04:46:12 +0000 (0:00:02.347) 0:01:03.815 ***** 2026-02-19 04:47:07.722921 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:47:07.722932 | orchestrator | 2026-02-19 04:47:07.722954 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-19 04:47:07.722968 | orchestrator | Thursday 19 February 2026 04:46:14 +0000 (0:00:02.472) 0:01:06.288 ***** 2026-02-19 04:47:07.722980 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:47:07.722992 | orchestrator | 2026-02-19 04:47:07.723005 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-19 04:47:07.723017 | orchestrator | Thursday 19 February 2026 04:46:32 +0000 (0:00:17.669) 0:01:23.957 ***** 2026-02-19 04:47:07.723030 | orchestrator | 2026-02-19 04:47:07.723043 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-19 04:47:07.723056 | orchestrator | Thursday 19 February 2026 04:46:32 +0000 (0:00:00.071) 0:01:24.028 ***** 2026-02-19 04:47:07.723068 | orchestrator | 2026-02-19 04:47:07.723079 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-19 04:47:07.723090 | orchestrator | Thursday 19 February 2026 04:46:32 +0000 (0:00:00.070) 0:01:24.099 ***** 2026-02-19 04:47:07.723100 | orchestrator | 2026-02-19 04:47:07.723111 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-19 04:47:07.723122 | orchestrator | Thursday 19 February 2026 04:46:32 +0000 (0:00:00.071) 0:01:24.171 ***** 2026-02-19 04:47:07.723132 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:47:07.723143 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:47:07.723154 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:47:07.723165 | orchestrator | 2026-02-19 04:47:07.723176 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-19 04:47:07.723186 | orchestrator | Thursday 19 February 2026 04:46:51 +0000 (0:00:18.973) 0:01:43.144 ***** 2026-02-19 04:47:07.723197 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:47:07.723208 | orchestrator | changed: [testbed-node-2] 2026-02-19 04:47:07.723219 | orchestrator | changed: [testbed-node-1] 2026-02-19 04:47:07.723229 | orchestrator | 2026-02-19 04:47:07.723240 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:47:07.723252 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-19 04:47:07.723264 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-19 04:47:07.723275 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-19 04:47:07.723295 | orchestrator | 2026-02-19 04:47:07.723306 | orchestrator | 2026-02-19 04:47:07.723317 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:47:07.723328 | orchestrator | Thursday 19 February 2026 04:47:07 +0000 (0:00:15.980) 0:01:59.124 ***** 2026-02-19 04:47:07.723339 | orchestrator | =============================================================================== 2026-02-19 04:47:07.723349 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.97s 2026-02-19 04:47:07.723360 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.67s 2026-02-19 04:47:07.723372 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.98s 2026-02-19 04:47:07.723382 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.10s 2026-02-19 04:47:07.723393 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.42s 2026-02-19 04:47:07.723404 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.20s 2026-02-19 04:47:07.723415 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.20s 2026-02-19 04:47:07.723445 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.17s 2026-02-19 04:47:07.723457 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.89s 2026-02-19 04:47:07.723467 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.77s 2026-02-19 04:47:07.723478 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.63s 2026-02-19 04:47:07.723489 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.61s 2026-02-19 04:47:07.723524 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.50s 2026-02-19 04:47:07.723545 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.62s 2026-02-19 04:47:07.723556 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.56s 2026-02-19 04:47:07.723567 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.47s 2026-02-19 04:47:07.723578 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.39s 2026-02-19 04:47:07.723588 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.35s 2026-02-19 04:47:07.723599 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.82s 2026-02-19 04:47:07.723610 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.58s 2026-02-19 04:47:08.407284 | orchestrator | ok: Runtime: 1:42:40.119748 2026-02-19 04:47:08.632986 | 2026-02-19 04:47:08.633178 | TASK [Deploy in a nutshell] 2026-02-19 04:47:09.169353 | orchestrator | skipping: Conditional result was False 2026-02-19 04:47:09.192881 | 2026-02-19 04:47:09.193073 | TASK [Bootstrap services] 2026-02-19 04:47:09.925177 | orchestrator | 2026-02-19 04:47:09.925408 | orchestrator | # BOOTSTRAP 2026-02-19 04:47:09.925449 | orchestrator | 2026-02-19 04:47:09.925474 | orchestrator | + set -e 2026-02-19 04:47:09.925496 | orchestrator | + echo 2026-02-19 04:47:09.925573 | orchestrator | + echo '# BOOTSTRAP' 2026-02-19 04:47:09.925593 | orchestrator | + echo 2026-02-19 04:47:09.925638 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-19 04:47:09.934761 | orchestrator | + set -e 2026-02-19 04:47:09.934958 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-19 04:47:12.067754 | orchestrator | 2026-02-19 04:47:12 | INFO  | It takes a 
moment until task 263a65b7-b4e1-4c13-b9ff-8c5241acc6ce (flavor-manager) has been started and output is visible here. 2026-02-19 04:47:19.708224 | orchestrator | 2026-02-19 04:47:15 | INFO  | Flavor SCS-1L-1 created 2026-02-19 04:47:19.708372 | orchestrator | 2026-02-19 04:47:15 | INFO  | Flavor SCS-1L-1-5 created 2026-02-19 04:47:19.709195 | orchestrator | 2026-02-19 04:47:15 | INFO  | Flavor SCS-1V-2 created 2026-02-19 04:47:19.709216 | orchestrator | 2026-02-19 04:47:15 | INFO  | Flavor SCS-1V-2-5 created 2026-02-19 04:47:19.709229 | orchestrator | 2026-02-19 04:47:15 | INFO  | Flavor SCS-1V-4 created 2026-02-19 04:47:19.709241 | orchestrator | 2026-02-19 04:47:16 | INFO  | Flavor SCS-1V-4-10 created 2026-02-19 04:47:19.709253 | orchestrator | 2026-02-19 04:47:16 | INFO  | Flavor SCS-1V-8 created 2026-02-19 04:47:19.709266 | orchestrator | 2026-02-19 04:47:16 | INFO  | Flavor SCS-1V-8-20 created 2026-02-19 04:47:19.709304 | orchestrator | 2026-02-19 04:47:16 | INFO  | Flavor SCS-2V-4 created 2026-02-19 04:47:19.709314 | orchestrator | 2026-02-19 04:47:16 | INFO  | Flavor SCS-2V-4-10 created 2026-02-19 04:47:19.709324 | orchestrator | 2026-02-19 04:47:16 | INFO  | Flavor SCS-2V-8 created 2026-02-19 04:47:19.709334 | orchestrator | 2026-02-19 04:47:16 | INFO  | Flavor SCS-2V-8-20 created 2026-02-19 04:47:19.709344 | orchestrator | 2026-02-19 04:47:17 | INFO  | Flavor SCS-2V-16 created 2026-02-19 04:47:19.709354 | orchestrator | 2026-02-19 04:47:17 | INFO  | Flavor SCS-2V-16-50 created 2026-02-19 04:47:19.709364 | orchestrator | 2026-02-19 04:47:17 | INFO  | Flavor SCS-4V-8 created 2026-02-19 04:47:19.709374 | orchestrator | 2026-02-19 04:47:17 | INFO  | Flavor SCS-4V-8-20 created 2026-02-19 04:47:19.709384 | orchestrator | 2026-02-19 04:47:17 | INFO  | Flavor SCS-4V-16 created 2026-02-19 04:47:19.709393 | orchestrator | 2026-02-19 04:47:17 | INFO  | Flavor SCS-4V-16-50 created 2026-02-19 04:47:19.709403 | orchestrator | 2026-02-19 04:47:17 | INFO  | Flavor 
SCS-4V-32 created 2026-02-19 04:47:19.709413 | orchestrator | 2026-02-19 04:47:18 | INFO  | Flavor SCS-4V-32-100 created 2026-02-19 04:47:19.709422 | orchestrator | 2026-02-19 04:47:18 | INFO  | Flavor SCS-8V-16 created 2026-02-19 04:47:19.709432 | orchestrator | 2026-02-19 04:47:18 | INFO  | Flavor SCS-8V-16-50 created 2026-02-19 04:47:19.709442 | orchestrator | 2026-02-19 04:47:18 | INFO  | Flavor SCS-8V-32 created 2026-02-19 04:47:19.709452 | orchestrator | 2026-02-19 04:47:18 | INFO  | Flavor SCS-8V-32-100 created 2026-02-19 04:47:19.709461 | orchestrator | 2026-02-19 04:47:18 | INFO  | Flavor SCS-16V-32 created 2026-02-19 04:47:19.709471 | orchestrator | 2026-02-19 04:47:19 | INFO  | Flavor SCS-16V-32-100 created 2026-02-19 04:47:19.709481 | orchestrator | 2026-02-19 04:47:19 | INFO  | Flavor SCS-2V-4-20s created 2026-02-19 04:47:19.709490 | orchestrator | 2026-02-19 04:47:19 | INFO  | Flavor SCS-4V-8-50s created 2026-02-19 04:47:19.709500 | orchestrator | 2026-02-19 04:47:19 | INFO  | Flavor SCS-8V-32-100s created 2026-02-19 04:47:21.955867 | orchestrator | 2026-02-19 04:47:21 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-19 04:47:32.035290 | orchestrator | 2026-02-19 04:47:32 | INFO  | Task 939fb2d3-14f8-4b78-98dc-9cef240fe0b2 (bootstrap-basic) was prepared for execution. 2026-02-19 04:47:32.035401 | orchestrator | 2026-02-19 04:47:32 | INFO  | It takes a moment until task 939fb2d3-14f8-4b78-98dc-9cef240fe0b2 (bootstrap-basic) has been started and output is visible here. 
2026-02-19 04:48:17.057268 | orchestrator | 2026-02-19 04:48:17.057400 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-19 04:48:17.057414 | orchestrator | 2026-02-19 04:48:17.057422 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-19 04:48:17.057429 | orchestrator | Thursday 19 February 2026 04:47:36 +0000 (0:00:00.070) 0:00:00.070 ***** 2026-02-19 04:48:17.057437 | orchestrator | ok: [localhost] 2026-02-19 04:48:17.057445 | orchestrator | 2026-02-19 04:48:17.057452 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-19 04:48:17.057458 | orchestrator | Thursday 19 February 2026 04:47:38 +0000 (0:00:01.838) 0:00:01.909 ***** 2026-02-19 04:48:17.057464 | orchestrator | ok: [localhost] 2026-02-19 04:48:17.057470 | orchestrator | 2026-02-19 04:48:17.057476 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-19 04:48:17.057483 | orchestrator | Thursday 19 February 2026 04:47:46 +0000 (0:00:08.231) 0:00:10.141 ***** 2026-02-19 04:48:17.057490 | orchestrator | changed: [localhost] 2026-02-19 04:48:17.057497 | orchestrator | 2026-02-19 04:48:17.057505 | orchestrator | TASK [Create public network] *************************************************** 2026-02-19 04:48:17.057512 | orchestrator | Thursday 19 February 2026 04:47:52 +0000 (0:00:06.357) 0:00:16.499 ***** 2026-02-19 04:48:17.057519 | orchestrator | changed: [localhost] 2026-02-19 04:48:17.057598 | orchestrator | 2026-02-19 04:48:17.057607 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-19 04:48:17.057614 | orchestrator | Thursday 19 February 2026 04:47:58 +0000 (0:00:05.747) 0:00:22.246 ***** 2026-02-19 04:48:17.057624 | orchestrator | changed: [localhost] 2026-02-19 04:48:17.057631 | orchestrator | 2026-02-19 04:48:17.057637 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-19 04:48:17.057644 | orchestrator | Thursday 19 February 2026 04:48:04 +0000 (0:00:06.301) 0:00:28.548 ***** 2026-02-19 04:48:17.057651 | orchestrator | changed: [localhost] 2026-02-19 04:48:17.057657 | orchestrator | 2026-02-19 04:48:17.057664 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-19 04:48:17.057671 | orchestrator | Thursday 19 February 2026 04:48:09 +0000 (0:00:04.544) 0:00:33.093 ***** 2026-02-19 04:48:17.057678 | orchestrator | changed: [localhost] 2026-02-19 04:48:17.057685 | orchestrator | 2026-02-19 04:48:17.057692 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-19 04:48:17.057708 | orchestrator | Thursday 19 February 2026 04:48:13 +0000 (0:00:03.796) 0:00:36.889 ***** 2026-02-19 04:48:17.057714 | orchestrator | ok: [localhost] 2026-02-19 04:48:17.057721 | orchestrator | 2026-02-19 04:48:17.057727 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:48:17.057734 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 04:48:17.057741 | orchestrator | 2026-02-19 04:48:17.057748 | orchestrator | 2026-02-19 04:48:17.057755 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:48:17.057762 | orchestrator | Thursday 19 February 2026 04:48:16 +0000 (0:00:03.634) 0:00:40.523 ***** 2026-02-19 04:48:17.057769 | orchestrator | =============================================================================== 2026-02-19 04:48:17.057776 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.23s 2026-02-19 04:48:17.057783 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.36s 2026-02-19 04:48:17.057790 | 
orchestrator | Set public network to default ------------------------------------------- 6.30s 2026-02-19 04:48:17.057796 | orchestrator | Create public network --------------------------------------------------- 5.75s 2026-02-19 04:48:17.057825 | orchestrator | Create public subnet ---------------------------------------------------- 4.54s 2026-02-19 04:48:17.057832 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.80s 2026-02-19 04:48:17.057839 | orchestrator | Create manager role ----------------------------------------------------- 3.63s 2026-02-19 04:48:17.057846 | orchestrator | Gathering Facts --------------------------------------------------------- 1.84s 2026-02-19 04:48:19.450657 | orchestrator | 2026-02-19 04:48:19 | INFO  | It takes a moment until task 4055262f-2897-44f0-b3a7-ed53e8d1fd38 (image-manager) has been started and output is visible here. 2026-02-19 04:49:01.020334 | orchestrator | 2026-02-19 04:48:22 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-19 04:49:01.020499 | orchestrator | 2026-02-19 04:48:22 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-19 04:49:01.020523 | orchestrator | 2026-02-19 04:48:22 | INFO  | Importing image Cirros 0.6.2 2026-02-19 04:49:01.020619 | orchestrator | 2026-02-19 04:48:22 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-19 04:49:01.020638 | orchestrator | 2026-02-19 04:48:24 | INFO  | Waiting for image to leave queued state... 2026-02-19 04:49:01.020654 | orchestrator | 2026-02-19 04:48:26 | INFO  | Waiting for import to complete... 
2026-02-19 04:49:01.020670 | orchestrator | 2026-02-19 04:48:36 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-19 04:49:01.020686 | orchestrator | 2026-02-19 04:48:37 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-19 04:49:01.020701 | orchestrator | 2026-02-19 04:48:37 | INFO  | Setting internal_version = 0.6.2 2026-02-19 04:49:01.020716 | orchestrator | 2026-02-19 04:48:37 | INFO  | Setting image_original_user = cirros 2026-02-19 04:49:01.020732 | orchestrator | 2026-02-19 04:48:37 | INFO  | Adding tag os:cirros 2026-02-19 04:49:01.020748 | orchestrator | 2026-02-19 04:48:37 | INFO  | Setting property architecture: x86_64 2026-02-19 04:49:01.020763 | orchestrator | 2026-02-19 04:48:37 | INFO  | Setting property hw_disk_bus: scsi 2026-02-19 04:49:01.020777 | orchestrator | 2026-02-19 04:48:37 | INFO  | Setting property hw_rng_model: virtio 2026-02-19 04:49:01.020792 | orchestrator | 2026-02-19 04:48:38 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-19 04:49:01.020807 | orchestrator | 2026-02-19 04:48:38 | INFO  | Setting property hw_watchdog_action: reset 2026-02-19 04:49:01.020822 | orchestrator | 2026-02-19 04:48:38 | INFO  | Setting property hypervisor_type: qemu 2026-02-19 04:49:01.020835 | orchestrator | 2026-02-19 04:48:38 | INFO  | Setting property os_distro: cirros 2026-02-19 04:49:01.020848 | orchestrator | 2026-02-19 04:48:39 | INFO  | Setting property os_purpose: minimal 2026-02-19 04:49:01.020860 | orchestrator | 2026-02-19 04:48:39 | INFO  | Setting property replace_frequency: never 2026-02-19 04:49:01.020873 | orchestrator | 2026-02-19 04:48:39 | INFO  | Setting property uuid_validity: none 2026-02-19 04:49:01.020885 | orchestrator | 2026-02-19 04:48:39 | INFO  | Setting property provided_until: none 2026-02-19 04:49:01.020899 | orchestrator | 2026-02-19 04:48:40 | INFO  | Setting property image_description: Cirros 2026-02-19 04:49:01.020913 | orchestrator | 2026-02-19 04:48:40 | INFO  | 
Setting property image_name: Cirros 2026-02-19 04:49:01.020928 | orchestrator | 2026-02-19 04:48:40 | INFO  | Setting property internal_version: 0.6.2 2026-02-19 04:49:01.020943 | orchestrator | 2026-02-19 04:48:41 | INFO  | Setting property image_original_user: cirros 2026-02-19 04:49:01.020997 | orchestrator | 2026-02-19 04:48:41 | INFO  | Setting property os_version: 0.6.2 2026-02-19 04:49:01.021040 | orchestrator | 2026-02-19 04:48:41 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-19 04:49:01.021059 | orchestrator | 2026-02-19 04:48:41 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-19 04:49:01.021073 | orchestrator | 2026-02-19 04:48:42 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-19 04:49:01.021087 | orchestrator | 2026-02-19 04:48:42 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-19 04:49:01.021100 | orchestrator | 2026-02-19 04:48:42 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-19 04:49:01.021115 | orchestrator | 2026-02-19 04:48:42 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-19 04:49:01.021136 | orchestrator | 2026-02-19 04:48:42 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-19 04:49:01.021165 | orchestrator | 2026-02-19 04:48:42 | INFO  | Importing image Cirros 0.6.3 2026-02-19 04:49:01.021189 | orchestrator | 2026-02-19 04:48:42 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-19 04:49:01.021201 | orchestrator | 2026-02-19 04:48:44 | INFO  | Waiting for import to complete... 
2026-02-19 04:49:01.021213 | orchestrator | 2026-02-19 04:48:54 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-02-19 04:49:01.021250 | orchestrator | 2026-02-19 04:48:54 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-02-19 04:49:01.021266 | orchestrator | 2026-02-19 04:48:54 | INFO  | Setting internal_version = 0.6.3 2026-02-19 04:49:01.021280 | orchestrator | 2026-02-19 04:48:54 | INFO  | Setting image_original_user = cirros 2026-02-19 04:49:01.021294 | orchestrator | 2026-02-19 04:48:54 | INFO  | Adding tag os:cirros 2026-02-19 04:49:01.021308 | orchestrator | 2026-02-19 04:48:55 | INFO  | Setting property architecture: x86_64 2026-02-19 04:49:01.021322 | orchestrator | 2026-02-19 04:48:55 | INFO  | Setting property hw_disk_bus: scsi 2026-02-19 04:49:01.021337 | orchestrator | 2026-02-19 04:48:55 | INFO  | Setting property hw_rng_model: virtio 2026-02-19 04:49:01.021351 | orchestrator | 2026-02-19 04:48:56 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-19 04:49:01.021365 | orchestrator | 2026-02-19 04:48:56 | INFO  | Setting property hw_watchdog_action: reset 2026-02-19 04:49:01.021380 | orchestrator | 2026-02-19 04:48:56 | INFO  | Setting property hypervisor_type: qemu 2026-02-19 04:49:01.021394 | orchestrator | 2026-02-19 04:48:56 | INFO  | Setting property os_distro: cirros 2026-02-19 04:49:01.021409 | orchestrator | 2026-02-19 04:48:57 | INFO  | Setting property os_purpose: minimal 2026-02-19 04:49:01.021424 | orchestrator | 2026-02-19 04:48:57 | INFO  | Setting property replace_frequency: never 2026-02-19 04:49:01.021439 | orchestrator | 2026-02-19 04:48:57 | INFO  | Setting property uuid_validity: none 2026-02-19 04:49:01.021454 | orchestrator | 2026-02-19 04:48:57 | INFO  | Setting property provided_until: none 2026-02-19 04:49:01.021468 | orchestrator | 2026-02-19 04:48:58 | INFO  | Setting property image_description: Cirros 2026-02-19 04:49:01.021483 | orchestrator | 2026-02-19 04:48:58 | INFO  | 
Setting property image_name: Cirros 2026-02-19 04:49:01.021497 | orchestrator | 2026-02-19 04:48:58 | INFO  | Setting property internal_version: 0.6.3 2026-02-19 04:49:01.021512 | orchestrator | 2026-02-19 04:48:58 | INFO  | Setting property image_original_user: cirros 2026-02-19 04:49:01.021555 | orchestrator | 2026-02-19 04:48:59 | INFO  | Setting property os_version: 0.6.3 2026-02-19 04:49:01.021571 | orchestrator | 2026-02-19 04:48:59 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-19 04:49:01.021584 | orchestrator | 2026-02-19 04:48:59 | INFO  | Setting property image_build_date: 2024-09-26 2026-02-19 04:49:01.021597 | orchestrator | 2026-02-19 04:49:00 | INFO  | Checking status of 'Cirros 0.6.3' 2026-02-19 04:49:01.021611 | orchestrator | 2026-02-19 04:49:00 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-02-19 04:49:01.021626 | orchestrator | 2026-02-19 04:49:00 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-02-19 04:49:01.345501 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-02-19 04:49:03.743735 | orchestrator | 2026-02-19 04:49:03 | INFO  | date: 2026-02-19 2026-02-19 04:49:03.743854 | orchestrator | 2026-02-19 04:49:03 | INFO  | image: octavia-amphora-haproxy-2024.2.20260219.qcow2 2026-02-19 04:49:03.743897 | orchestrator | 2026-02-19 04:49:03 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260219.qcow2 2026-02-19 04:49:03.743910 | orchestrator | 2026-02-19 04:49:03 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260219.qcow2.CHECKSUM 2026-02-19 04:49:03.969734 | orchestrator | 2026-02-19 04:49:03 | INFO  | checksum: 41e52d9b8c2560231afbd6dab46b1935199b73ad4229b000ce661d902c6e7a0f 2026-02-19 04:49:04.042419 | orchestrator | 
2026-02-19 04:49:04 | INFO  | It takes a moment until task 5fec046e-e408-471d-aa0c-751bb8a3c060 (image-manager) has been started and output is visible here. 2026-02-19 04:50:27.943714 | orchestrator | 2026-02-19 04:49:06 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-19' 2026-02-19 04:50:27.943859 | orchestrator | 2026-02-19 04:49:06 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260219.qcow2: 200 2026-02-19 04:50:27.943885 | orchestrator | 2026-02-19 04:49:06 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-19 2026-02-19 04:50:27.943905 | orchestrator | 2026-02-19 04:49:06 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260219.qcow2 2026-02-19 04:50:27.943925 | orchestrator | 2026-02-19 04:49:08 | INFO  | Waiting for image to leave queued state... 2026-02-19 04:50:27.943944 | orchestrator | 2026-02-19 04:49:10 | INFO  | Waiting for import to complete... 2026-02-19 04:50:27.943984 | orchestrator | 2026-02-19 04:49:20 | INFO  | Waiting for import to complete... 2026-02-19 04:50:27.944016 | orchestrator | 2026-02-19 04:49:30 | INFO  | Waiting for import to complete... 2026-02-19 04:50:27.944034 | orchestrator | 2026-02-19 04:49:40 | INFO  | Waiting for import to complete... 2026-02-19 04:50:27.944056 | orchestrator | 2026-02-19 04:49:50 | INFO  | Waiting for import to complete... 2026-02-19 04:50:27.944073 | orchestrator | 2026-02-19 04:50:01 | INFO  | Waiting for import to complete... 2026-02-19 04:50:27.944094 | orchestrator | 2026-02-19 04:50:11 | INFO  | Waiting for import to complete... 
2026-02-19 04:50:27.944113 | orchestrator | 2026-02-19 04:50:21 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-19' successfully completed, reloading images 2026-02-19 04:50:27.944134 | orchestrator | 2026-02-19 04:50:21 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-19' 2026-02-19 04:50:27.944183 | orchestrator | 2026-02-19 04:50:21 | INFO  | Setting internal_version = 2026-02-19 2026-02-19 04:50:27.944203 | orchestrator | 2026-02-19 04:50:21 | INFO  | Setting image_original_user = ubuntu 2026-02-19 04:50:27.944223 | orchestrator | 2026-02-19 04:50:21 | INFO  | Adding tag amphora 2026-02-19 04:50:27.944246 | orchestrator | 2026-02-19 04:50:22 | INFO  | Adding tag os:ubuntu 2026-02-19 04:50:27.944268 | orchestrator | 2026-02-19 04:50:22 | INFO  | Setting property architecture: x86_64 2026-02-19 04:50:27.944290 | orchestrator | 2026-02-19 04:50:22 | INFO  | Setting property hw_disk_bus: scsi 2026-02-19 04:50:27.944312 | orchestrator | 2026-02-19 04:50:22 | INFO  | Setting property hw_rng_model: virtio 2026-02-19 04:50:27.944335 | orchestrator | 2026-02-19 04:50:23 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-19 04:50:27.944357 | orchestrator | 2026-02-19 04:50:23 | INFO  | Setting property hw_watchdog_action: reset 2026-02-19 04:50:27.944380 | orchestrator | 2026-02-19 04:50:23 | INFO  | Setting property hypervisor_type: qemu 2026-02-19 04:50:27.944401 | orchestrator | 2026-02-19 04:50:23 | INFO  | Setting property os_distro: ubuntu 2026-02-19 04:50:27.944423 | orchestrator | 2026-02-19 04:50:24 | INFO  | Setting property replace_frequency: quarterly 2026-02-19 04:50:27.944442 | orchestrator | 2026-02-19 04:50:24 | INFO  | Setting property uuid_validity: last-1 2026-02-19 04:50:27.944462 | orchestrator | 2026-02-19 04:50:24 | INFO  | Setting property provided_until: none 2026-02-19 04:50:27.944503 | orchestrator | 2026-02-19 04:50:25 | INFO  | Setting property os_purpose: network 2026-02-19 04:50:27.944523 | orchestrator 
| 2026-02-19 04:50:25 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-02-19 04:50:27.944542 | orchestrator | 2026-02-19 04:50:25 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-02-19 04:50:27.944559 | orchestrator | 2026-02-19 04:50:25 | INFO  | Setting property internal_version: 2026-02-19 2026-02-19 04:50:27.944636 | orchestrator | 2026-02-19 04:50:26 | INFO  | Setting property image_original_user: ubuntu 2026-02-19 04:50:27.944655 | orchestrator | 2026-02-19 04:50:26 | INFO  | Setting property os_version: 2026-02-19 2026-02-19 04:50:27.944674 | orchestrator | 2026-02-19 04:50:26 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260219.qcow2 2026-02-19 04:50:27.944692 | orchestrator | 2026-02-19 04:50:27 | INFO  | Setting property image_build_date: 2026-02-19 2026-02-19 04:50:27.944710 | orchestrator | 2026-02-19 04:50:27 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-19' 2026-02-19 04:50:27.944755 | orchestrator | 2026-02-19 04:50:27 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-19' 2026-02-19 04:50:27.944774 | orchestrator | 2026-02-19 04:50:27 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-02-19 04:50:27.944792 | orchestrator | 2026-02-19 04:50:27 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-02-19 04:50:27.944811 | orchestrator | 2026-02-19 04:50:27 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-02-19 04:50:27.944829 | orchestrator | 2026-02-19 04:50:27 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-02-19 04:50:28.393321 | orchestrator | ok: Runtime: 0:03:18.762646 2026-02-19 04:50:28.414203 | 2026-02-19 04:50:28.414343 | TASK [Run checks] 2026-02-19 04:50:29.185811 | orchestrator | + set -e 2026-02-19 04:50:29.186134 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-02-19 04:50:29.186178 | orchestrator | ++ export INTERACTIVE=false 2026-02-19 04:50:29.186209 | orchestrator | ++ INTERACTIVE=false 2026-02-19 04:50:29.186231 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-19 04:50:29.186271 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-19 04:50:29.186310 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-19 04:50:29.186923 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-19 04:50:29.194634 | orchestrator | 2026-02-19 04:50:29.194739 | orchestrator | # CHECK 2026-02-19 04:50:29.194753 | orchestrator | 2026-02-19 04:50:29.194766 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-19 04:50:29.194783 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-19 04:50:29.194794 | orchestrator | + echo 2026-02-19 04:50:29.194805 | orchestrator | + echo '# CHECK' 2026-02-19 04:50:29.194816 | orchestrator | + echo 2026-02-19 04:50:29.194832 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-19 04:50:29.195381 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-19 04:50:29.245245 | orchestrator | 2026-02-19 04:50:29.245367 | orchestrator | ## Containers @ testbed-manager 2026-02-19 04:50:29.245393 | orchestrator | 2026-02-19 04:50:29.245417 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-19 04:50:29.245439 | orchestrator | + echo 2026-02-19 04:50:29.245462 | orchestrator | + echo '## Containers @ testbed-manager' 2026-02-19 04:50:29.245475 | orchestrator | + echo 2026-02-19 04:50:29.245487 | orchestrator | + osism container testbed-manager ps 2026-02-19 04:50:31.244144 | orchestrator | 2026-02-19 04:50:31 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-02-19 04:50:31.619053 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-19 04:50:31.619192 | orchestrator | 4d6d364631fa 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter 2026-02-19 04:50:31.619221 | orchestrator | 4eabba2b33e5 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager 2026-02-19 04:50:31.619234 | orchestrator | 55e0a41bce27 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-19 04:50:31.619246 | orchestrator | 7e706f0ac97a registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-19 04:50:31.619258 | orchestrator | 512b130f6942 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-02-19 04:50:31.619274 | orchestrator | 766f1e3c01cc registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 59 minutes ago Up 58 minutes cephclient 2026-02-19 04:50:31.619286 | orchestrator | 3a555ac8e884 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-19 04:50:31.619297 | orchestrator | c5e8dd4ef991 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-19 04:50:31.619333 | orchestrator | 06229fced94e registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-19 04:50:31.619345 | orchestrator | 8892af0220aa registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-02-19 04:50:31.619356 | orchestrator | 1d48d8ad552a phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-02-19 04:50:31.619367 | 
orchestrator | 7bf93dd7cf96 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-02-19 04:50:31.619378 | orchestrator | 5528c3b60e4b registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-02-19 04:50:31.619388 | orchestrator | ae859f2d9feb registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-02-19 04:50:31.619420 | orchestrator | 6c1ae6a06b72 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-02-19 04:50:31.619441 | orchestrator | a6a7ff2447b5 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-02-19 04:50:31.619453 | orchestrator | ff0305ba3bfb registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-02-19 04:50:31.619464 | orchestrator | 6555158bfd21 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-02-19 04:50:31.619475 | orchestrator | 86ea2f49d768 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-02-19 04:50:31.619486 | orchestrator | 623fe60a7d21 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-02-19 04:50:31.619497 | orchestrator | 5a6383d48233 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-02-19 04:50:31.619508 | orchestrator | 05d67d9952be registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-02-19 
04:50:31.619526 | orchestrator | 4b070b5ffa32 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-02-19 04:50:31.619538 | orchestrator | 3209480e16a9 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-02-19 04:50:31.619549 | orchestrator | d47cdf208275 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-02-19 04:50:31.619560 | orchestrator | 51468936bff9 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-02-19 04:50:31.619607 | orchestrator | da781d77edb4 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-02-19 04:50:31.619619 | orchestrator | 58a2ea38e1e4 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-02-19 04:50:31.619630 | orchestrator | 1f89d682dd14 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-02-19 04:50:31.619647 | orchestrator | c4ffe41bfc42 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-02-19 04:50:31.940486 | orchestrator | 2026-02-19 04:50:31.940599 | orchestrator | ## Images @ testbed-manager 2026-02-19 04:50:31.940614 | orchestrator | 2026-02-19 04:50:31.940621 | orchestrator | + echo 2026-02-19 04:50:31.940627 | orchestrator | + echo '## Images @ testbed-manager' 2026-02-19 04:50:31.940635 | orchestrator | + echo 2026-02-19 04:50:31.940645 | orchestrator | + osism container testbed-manager images 2026-02-19 04:50:34.346386 | orchestrator | 
REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-19 04:50:34.346475 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 ca1bb7fb6745 25 hours ago 239MB 2026-02-19 04:50:34.346485 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 3 weeks ago 41.4MB 2026-02-19 04:50:34.346491 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB 2026-02-19 04:50:34.346497 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB 2026-02-19 04:50:34.346502 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-19 04:50:34.346508 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-19 04:50:34.346513 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-19 04:50:34.346521 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB 2026-02-19 04:50:34.346526 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-19 04:50:34.346549 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB 2026-02-19 04:50:34.346555 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB 2026-02-19 04:50:34.346560 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-19 04:50:34.346566 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB 2026-02-19 04:50:34.346591 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB 2026-02-19 04:50:34.346597 | orchestrator | 
registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB 2026-02-19 04:50:34.346602 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB 2026-02-19 04:50:34.346607 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB 2026-02-19 04:50:34.346613 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB 2026-02-19 04:50:34.346618 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 3 months ago 334MB 2026-02-19 04:50:34.346624 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 4 months ago 742MB 2026-02-19 04:50:34.346629 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB 2026-02-19 04:50:34.346634 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 7 months ago 226MB 2026-02-19 04:50:34.346640 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 9 months ago 453MB 2026-02-19 04:50:34.346645 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB 2026-02-19 04:50:34.346650 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-02-19 04:50:34.652807 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-19 04:50:34.653288 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-19 04:50:34.715154 | orchestrator | 2026-02-19 04:50:34.715252 | orchestrator | ## Containers @ testbed-node-0 2026-02-19 04:50:34.715266 | orchestrator | 2026-02-19 04:50:34.715275 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-19 04:50:34.715285 | orchestrator | + echo 2026-02-19 04:50:34.715294 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-02-19 04:50:34.715303 | orchestrator | + echo 2026-02-19 04:50:34.715312 | orchestrator | + osism container testbed-node-0 ps 2026-02-19 
04:50:37.256212 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-19 04:50:37.256335 | orchestrator | eb299b63b6a4 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-19 04:50:37.256372 | orchestrator | ef2228200ff8 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-19 04:50:37.256386 | orchestrator | dbda073f1546 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-02-19 04:50:37.256398 | orchestrator | 195f14e637e8 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-19 04:50:37.256432 | orchestrator | c4b36b1b3504 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-19 04:50:37.256444 | orchestrator | 81d68923a801 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-19 04:50:37.256461 | orchestrator | 2433c053923f registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-19 04:50:37.256474 | orchestrator | 3a65e77a2018 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-19 04:50:37.256486 | orchestrator | 17ad9c875807 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-19 04:50:37.256498 | orchestrator | 0520091a0146 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-02-19 04:50:37.256509 | orchestrator | a5b2960e74ee registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-19 04:50:37.256520 | orchestrator | 25ca16e31ab7 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-19 04:50:37.256531 | orchestrator | 9ba07593dd23 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-19 04:50:37.256542 | orchestrator | c5392303b3ea registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-19 04:50:37.256553 | orchestrator | 7a4e075767d7 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-19 04:50:37.256564 | orchestrator | ef5331d1bed6 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-19 04:50:37.256611 | orchestrator | 51dc1543087d registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-19 04:50:37.256622 | orchestrator | 434bafb146bc registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-19 04:50:37.256634 | orchestrator | ae2d82be6615 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-19 04:50:37.256671 | orchestrator | fa7f772ccd3b 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-19 04:50:37.256685 | orchestrator | 8eaf4f502054 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-02-19 04:50:37.256696 | orchestrator | 4b0ed258582d registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-19 04:50:37.256715 | orchestrator | a9301418a679 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-19 04:50:37.256726 | orchestrator | a804722a9202 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-19 04:50:37.256737 | orchestrator | 297b2e65648d registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-19 04:50:37.256753 | orchestrator | 8b35ff199124 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-19 04:50:37.256764 | orchestrator | 17aeac094647 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-02-19 04:50:37.256777 | orchestrator | 891bf294ff4e registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-19 04:50:37.256788 | orchestrator | 8606f3590514 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 
2026-02-19 04:50:37.256800 | orchestrator | 0baf584937f9 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-02-19 04:50:37.256811 | orchestrator | 8232dcf76dc9 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-02-19 04:50:37.256822 | orchestrator | 28c3da1ad3d2 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-02-19 04:50:37.256833 | orchestrator | 7f95cd3971c0 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-02-19 04:50:37.256845 | orchestrator | 70331739642f registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume
2026-02-19 04:50:37.256856 | orchestrator | ba966c7ac868 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-02-19 04:50:37.256867 | orchestrator | 28a5f9726d11 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-02-19 04:50:37.256879 | orchestrator | ae8e3d90669e registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-02-19 04:50:37.256891 | orchestrator | b091408570cd registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 36 minutes (healthy) skyline_console
2026-02-19 04:50:37.256903 | orchestrator | 876118fe0b1f registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver
2026-02-19 04:50:37.256922 | orchestrator | 02fe1eedc2d4 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon
2026-02-19 04:50:37.256941 | orchestrator | 04d556fe5e15 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy
2026-02-19 04:50:37.256952 | orchestrator | a276a0d9b5eb registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor
2026-02-19 04:50:37.256970 | orchestrator | eda3c45674ac registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api
2026-02-19 04:50:37.256982 | orchestrator | 930aada18f12 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler
2026-02-19 04:50:37.256993 | orchestrator | f12aa28430e7 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server
2026-02-19 04:50:37.257005 | orchestrator | 64cda6e5eea0 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api
2026-02-19 04:50:37.257016 | orchestrator | 739931cb2a8e registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone
2026-02-19 04:50:37.257027 | orchestrator | c4837256479b registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet
2026-02-19 04:50:37.257038 | orchestrator | a26a377abcf4 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh
2026-02-19 04:50:37.257049 | orchestrator | 4ec44bd49f1e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-0
2026-02-19 04:50:37.257060 | orchestrator | bf72554a35eb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-02-19 04:50:37.257071 | orchestrator | d0a6e5ab4aac registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-02-19 04:50:37.257082 | orchestrator | fe3507c07ab8 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-19 04:50:37.257094 | orchestrator | 1506fac125a9 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-19 04:50:37.257105 | orchestrator | 442528e44958 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-19 04:50:37.257116 | orchestrator | ff3c3a61e380 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-19 04:50:37.257133 | orchestrator | 17d4f1bb5483 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-19 04:50:37.257142 | orchestrator | 6508c67cd5f9 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-19 04:50:37.257155 | orchestrator | 6a3d37282c3b registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-19 04:50:37.257167 | orchestrator | 45e9d6c6a3b0 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-19 04:50:37.257174 | orchestrator | 9cbbd686fd8d registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-19 04:50:37.257181 | orchestrator | 587081b5e76c registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-19 04:50:37.257187 | orchestrator | d06939ea2575 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-19 04:50:37.257194 | orchestrator | cb1c4c33152e registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-19 04:50:37.257205 | orchestrator | 13a11b65bb09 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-02-19 04:50:37.257215 | orchestrator | 010fb9aa2b82 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-02-19 04:50:37.257232 | orchestrator | b6c5a037de91 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-02-19 04:50:37.257245 | orchestrator | 0fac03d111ab registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-02-19 04:50:37.257256 | orchestrator | fbf3e8294320 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-19 04:50:37.257266 | orchestrator | b7f334c71985 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-19 04:50:37.257278 | orchestrator | a74341a8c300 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-19 04:50:37.597308 | orchestrator |
2026-02-19 04:50:37.597432 | orchestrator | ## Images @ testbed-node-0
2026-02-19 04:50:37.597451 | orchestrator |
2026-02-19 04:50:37.597463 | orchestrator | + echo
2026-02-19 04:50:37.597475 | orchestrator | + echo '## Images @ testbed-node-0'
2026-02-19 04:50:37.597487 | orchestrator | + echo
2026-02-19 04:50:37.597498 | orchestrator | + osism container testbed-node-0 images
2026-02-19 04:50:40.043181 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-19 04:50:40.043294 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-19 04:50:40.043305 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-19 04:50:40.043312 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-19 04:50:40.043318 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-19 04:50:40.043339 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-19 04:50:40.043346 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-19 04:50:40.043352 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-19 04:50:40.043358 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-19 04:50:40.043364 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-19 04:50:40.043370 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB
2026-02-19 04:50:40.043376 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-19 04:50:40.043382 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB
2026-02-19 04:50:40.043389 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB
2026-02-19 04:50:40.043395 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB
2026-02-19 04:50:40.043401 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB
2026-02-19 04:50:40.043407 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB
2026-02-19 04:50:40.043413 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB
2026-02-19 04:50:40.043419 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-19 04:50:40.043425 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB
2026-02-19 04:50:40.043431 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-19 04:50:40.043437 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB
2026-02-19 04:50:40.043443 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB
2026-02-19 04:50:40.043450 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB
2026-02-19 04:50:40.043456 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB
2026-02-19 04:50:40.043462 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB
2026-02-19 04:50:40.043468 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB
2026-02-19 04:50:40.043474 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB
2026-02-19 04:50:40.043483 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB
2026-02-19 04:50:40.043490 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-19 04:50:40.043496 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-19 04:50:40.043508 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-19 04:50:40.043528 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-19 04:50:40.043534 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-19 04:50:40.043540 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-19 04:50:40.043546 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-19 04:50:40.043552 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-19 04:50:40.043559 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-19 04:50:40.043565 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-19 04:50:40.043619 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-19 04:50:40.043627 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-19 04:50:40.043633 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-19 04:50:40.043639 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-19 04:50:40.043645 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-19 04:50:40.043651 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB
2026-02-19 04:50:40.043657 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB
2026-02-19 04:50:40.043663 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB
2026-02-19 04:50:40.043670 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB
2026-02-19 04:50:40.043676 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB
2026-02-19 04:50:40.043682 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-19 04:50:40.043689 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-19 04:50:40.043695 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-19 04:50:40.043701 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-19 04:50:40.043707 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-19 04:50:40.043713 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-19 04:50:40.043719 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-19 04:50:40.043725 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-19 04:50:40.043736 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB
2026-02-19 04:50:40.043742 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB
2026-02-19 04:50:40.043752 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB
2026-02-19 04:50:40.043758 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB
2026-02-19 04:50:40.043765 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB
2026-02-19 04:50:40.043771 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB
2026-02-19 04:50:40.043777 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-19 04:50:40.043789 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-19 04:50:40.043795 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-19 04:50:40.043801 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-19 04:50:40.043807 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-19 04:50:40.043813 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-19 04:50:40.043819 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB
2026-02-19 04:50:40.344025 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-19 04:50:40.344201 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-19 04:50:40.393538 | orchestrator |
2026-02-19 04:50:40.393760 | orchestrator | ## Containers @ testbed-node-1
2026-02-19 04:50:40.393799 | orchestrator |
2026-02-19 04:50:40.393821 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-19 04:50:40.393841 | orchestrator | + echo
2026-02-19 04:50:40.393861 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-02-19 04:50:40.393882 | orchestrator | + echo
2026-02-19 04:50:40.393903 | orchestrator | + osism container testbed-node-1 ps
2026-02-19 04:50:42.794643 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-19 04:50:42.794749 | orchestrator | e41cc8def698 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-02-19 04:50:42.794766 | orchestrator | 57385d5de9a4 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-02-19 04:50:42.794778 | orchestrator | 4347d1637252 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-02-19 04:50:42.794789 | orchestrator | ea385d85b625 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter
2026-02-19 04:50:42.794802 | orchestrator | 057c61ab4402 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-02-19 04:50:42.794813 | orchestrator | 74251e287712 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-02-19 04:50:42.794849 | orchestrator | 240c0fb8661e registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-02-19 04:50:42.794862 | orchestrator | 90c15f102fe1 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-19 04:50:42.794882 | orchestrator | 7413c67c77d1 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-02-19 04:50:42.794900 | orchestrator | 1b7dbd66b099 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-02-19 04:50:42.794918 | orchestrator | 66437943ebfa registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-02-19 04:50:42.794937 | orchestrator | 10bdc10feab8 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-02-19 04:50:42.794977 | orchestrator | 595f3511273c registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-02-19 04:50:42.794998 | orchestrator | aa5aa2a14d54 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-02-19 04:50:42.795017 | orchestrator | 58c1fba211bd registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-02-19 04:50:42.795037 | orchestrator | 3747a62557f7 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-02-19 04:50:42.795055 | orchestrator | efdcb589079f registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-02-19 04:50:42.795074 | orchestrator | 26deceaa6f52 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-02-19 04:50:42.795087 | orchestrator | 08478e6641e7 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-02-19 04:50:42.795119 | orchestrator | 8779499e09cf registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-02-19 04:50:42.795131 | orchestrator | b163cad170c3 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-02-19 04:50:42.795143 | orchestrator | d2d8b1a62fea registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-02-19 04:50:42.795156 | orchestrator | b8f941a33921 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-02-19 04:50:42.795168 | orchestrator | 784c80f05d9f registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-02-19 04:50:42.795191 | orchestrator | 584cdf232c93 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns
2026-02-19 04:50:42.795204 | orchestrator | 3bc0186a3100 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-02-19 04:50:42.795216 | orchestrator | 6e6d8755c58c registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-02-19 04:50:42.795228 | orchestrator | 1a7986cbcdaa registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-02-19 04:50:42.795241 | orchestrator | 856b00163b4a registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-02-19 04:50:42.795253 | orchestrator | 42ff7cf52db0 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-02-19 04:50:42.795266 | orchestrator | 15e13383b6ba registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-02-19 04:50:42.795278 | orchestrator | 9c17925203cd registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-02-19 04:50:42.795288 | orchestrator | 50dc397d9d74 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-02-19 04:50:42.795299 | orchestrator | 59a1c95dcf5c registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume
2026-02-19 04:50:42.795310 | orchestrator | 5f795e165f0f registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-02-19 04:50:42.795320 | orchestrator | 00dfbc407f56 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api
2026-02-19 04:50:42.795337 | orchestrator | 2a54e30fb3ed registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-02-19 04:50:42.795349 | orchestrator | 75cc6a783482 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 36 minutes (healthy) skyline_console
2026-02-19 04:50:42.795359 | orchestrator | e4b7377fffda registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver
2026-02-19 04:50:42.795378 | orchestrator | f790709815e7 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon
2026-02-19 04:50:42.795389 | orchestrator | 427c65cda6da registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy
2026-02-19 04:50:42.795406 | orchestrator | c992105b34db registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor
2026-02-19 04:50:42.795417 | orchestrator | 921a4eb86b90 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api
2026-02-19 04:50:42.795427 | orchestrator | 02a827a858f8 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler
2026-02-19 04:50:42.795438 | orchestrator | ef0e27daabd3 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server
2026-02-19 04:50:42.795449 | orchestrator | 2005f99be1cc registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api
2026-02-19 04:50:42.795459 | orchestrator | 00759c720333 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone
2026-02-19 04:50:42.795470 | orchestrator | 6978d718569f registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet
2026-02-19 04:50:42.795480 | orchestrator | 48f8a7583b30 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh
2026-02-19 04:50:42.795491 | orchestrator | 20fe479d804e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-1
2026-02-19 04:50:42.795502 | orchestrator | 318ceaae6139 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1
2026-02-19 04:50:42.795513 | orchestrator | a8e499fc5d9a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1
2026-02-19 04:50:42.795524 | orchestrator | 8bc14f96b620 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-19 04:50:42.795534 | orchestrator | 25680482f6ba registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-19 04:50:42.795545 | orchestrator | 09bba94c5c5d registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-19 04:50:42.795555 | orchestrator | 269a43ba072a registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-19 04:50:42.795566 | orchestrator | 5e08fe99bdc6 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-19 04:50:42.795610 | orchestrator | 5be1ac634b71 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-19 04:50:42.795629 | orchestrator | dab4eee9790d registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-19 04:50:42.795655 | orchestrator | c8976aa359a7 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-19 04:50:42.795667 | orchestrator | 604b305fd756 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-19 04:50:42.795677 | orchestrator | b29832967d36 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-19 04:50:42.795688 | orchestrator | 1117b2a7176c registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-19 04:50:42.795699 | orchestrator | 1c43d4bd095d registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-19 04:50:42.795717 | orchestrator | deb92b3eeec8 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-02-19 04:50:42.795728 | orchestrator | cc8d73570c27 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-02-19 04:50:42.795738 | orchestrator | e8389920b4b8 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-02-19 04:50:42.795749 | orchestrator | e1ff1e40b51a registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-02-19 04:50:42.795760 | orchestrator | 09e7dd3977af registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-19 04:50:42.795776 | orchestrator | 60b311b36208 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-19 04:50:42.795787 | orchestrator | 1c3a231b6f9f registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-19 04:50:43.126676 | orchestrator |
2026-02-19 04:50:43.126763 | orchestrator | ## Images @ testbed-node-1
2026-02-19 04:50:43.126776 | orchestrator |
2026-02-19 04:50:43.126786 | orchestrator | + echo
2026-02-19 04:50:43.126796 | orchestrator | + echo '## Images @ testbed-node-1'
2026-02-19 04:50:43.126805 | orchestrator | + echo
2026-02-19 04:50:43.126814 | orchestrator | + osism container testbed-node-1 images
2026-02-19 04:50:45.526958 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-19 04:50:45.527052 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-19 04:50:45.527064 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-19 04:50:45.527073 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-19 04:50:45.527083 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-19 04:50:45.527091 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-19 04:50:45.527099 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-19 04:50:45.527133 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-19 04:50:45.527141 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-19 04:50:45.527149 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-19 04:50:45.527157 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB
2026-02-19 04:50:45.527165 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-19 04:50:45.527173 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB
2026-02-19 04:50:45.527181 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB
2026-02-19 04:50:45.527188 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB
2026-02-19 04:50:45.527196 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB
2026-02-19 04:50:45.527204 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB
2026-02-19 04:50:45.527212 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB
2026-02-19 04:50:45.527220 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-19 04:50:45.527227 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB
2026-02-19 04:50:45.527235 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-19 04:50:45.527243 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB
2026-02-19 04:50:45.527251 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB
2026-02-19 04:50:45.527259 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB
2026-02-19 04:50:45.527266 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB
2026-02-19 04:50:45.527274 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB
2026-02-19 04:50:45.527282 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB
2026-02-19 04:50:45.527290 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB
2026-02-19 04:50:45.527298 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB
2026-02-19 04:50:45.527306 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-19 04:50:45.527314 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-19 04:50:45.527321 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-19 04:50:45.527344 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-19 04:50:45.527360 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-19 04:50:45.527368 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-19 04:50:45.527376 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-19 04:50:45.527384 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-19 04:50:45.527391 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-19 04:50:45.527414 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-19 04:50:45.527422 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-19 04:50:45.527430 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-19 04:50:45.527438 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-19 04:50:45.527446 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-19 04:50:45.527454 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-19 04:50:45.527462 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB
2026-02-19 04:50:45.527469 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB
2026-02-19 04:50:45.527477 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB
2026-02-19 04:50:45.527485 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB
2026-02-19 04:50:45.527493 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB
2026-02-19 04:50:45.527502 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-19 04:50:45.527512 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-19 04:50:45.527521 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-19 04:50:45.527530 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-19 04:50:45.527539 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-19 04:50:45.527548 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-19 04:50:45.527557 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-19 04:50:45.527566 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-19 04:50:45.527615 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a
2 months ago 989MB 2026-02-19 04:50:45.527629 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-19 04:50:45.527644 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-19 04:50:45.527668 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-19 04:50:45.527681 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-19 04:50:45.527691 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-19 04:50:45.527700 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-19 04:50:45.527717 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-19 04:50:45.527726 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-19 04:50:45.527735 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-19 04:50:45.527746 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-19 04:50:45.527759 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-19 04:50:45.527772 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-19 04:50:45.859998 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-19 04:50:45.860911 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-19 04:50:45.920922 | orchestrator | 2026-02-19 04:50:45.921017 | orchestrator | ## Containers @ testbed-node-2 
2026-02-19 04:50:45.921032 | orchestrator |
2026-02-19 04:50:45.921044 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-19 04:50:45.921055 | orchestrator | + echo
2026-02-19 04:50:45.921066 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-02-19 04:50:45.921078 | orchestrator | + echo
2026-02-19 04:50:45.921089 | orchestrator | + osism container testbed-node-2 ps
2026-02-19 04:50:48.429008 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-19 04:50:48.429108 | orchestrator | 13df34019dc1 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-02-19 04:50:48.429123 | orchestrator | f94253e2b740 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api
2026-02-19 04:50:48.429134 | orchestrator | 18efc641ad09 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-02-19 04:50:48.429144 | orchestrator | 52af9bb416f8 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter
2026-02-19 04:50:48.429156 | orchestrator | 4103246c4bc6 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-02-19 04:50:48.429165 | orchestrator | cdb84791ae0d registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-02-19 04:50:48.429175 | orchestrator | a51b491d11ef registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-02-19 04:50:48.429186 | orchestrator | 643b0ad899ca registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 11 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-19 04:50:48.429221 | orchestrator | 6726a180394d registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-02-19 04:50:48.429232 | orchestrator | 56c3bae7d40a registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-02-19 04:50:48.429242 | orchestrator | 546ad4160aac registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-02-19 04:50:48.429251 | orchestrator | 06846d0d512f registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-02-19 04:50:48.429279 | orchestrator | 03cbc875e938 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-02-19 04:50:48.429290 | orchestrator | 1fe5c18173a3 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-02-19 04:50:48.429299 | orchestrator | ef38ccfdea12 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-02-19 04:50:48.429309 | orchestrator | 9359ef74d988 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-02-19 04:50:48.429318 | orchestrator | eac4e5cf1e8a registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-02-19 04:50:48.429328 | orchestrator | 3da48a671b43 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-02-19 04:50:48.429337 | orchestrator | e970e4ab9bae registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-02-19 04:50:48.429365 | orchestrator | e8ed60ac7f18 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-02-19 04:50:48.429375 | orchestrator | f2352a104509 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-02-19 04:50:48.429384 | orchestrator | 6fc761059579 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-02-19 04:50:48.429394 | orchestrator | fb3a64a001f5 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-02-19 04:50:48.429403 | orchestrator | 8aeb107cb4d7 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-02-19 04:50:48.429413 | orchestrator | 632a5d8f9682 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-02-19 04:50:48.429431 | orchestrator | 568720f65ab5 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-02-19 04:50:48.429441 | orchestrator | 4bb07325d6a4 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-02-19 04:50:48.429450 | orchestrator | b1d25a644d1b registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-02-19 04:50:48.429460 | orchestrator | 98a7f891751c registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-02-19 04:50:48.429469 | orchestrator | 491157a64a64 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-02-19 04:50:48.429486 | orchestrator | 9bf77ffc8c20 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-02-19 04:50:48.429496 | orchestrator | 2b4f20833a1b registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-02-19 04:50:48.429506 | orchestrator | e2cdc6b598c0 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-02-19 04:50:48.429515 | orchestrator | 14ac53921933 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume
2026-02-19 04:50:48.429524 | orchestrator | 1448bdaf101b registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-02-19 04:50:48.429534 | orchestrator | 60eecd379586 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-02-19 04:50:48.429543 | orchestrator | c3618966068d registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-02-19 04:50:48.429552 | orchestrator | d4ed94d6e852 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console
2026-02-19 04:50:48.429562 | orchestrator | 03b6e55817e4 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver
2026-02-19 04:50:48.429661 | orchestrator | cc2bd8a90eef registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon
2026-02-19 04:50:48.429688 | orchestrator | 4d2a0ae95931 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy
2026-02-19 04:50:48.429700 | orchestrator | ae0bb92ce37c registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor
2026-02-19 04:50:48.429710 | orchestrator | b21932386126 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api
2026-02-19 04:50:48.429729 | orchestrator | d2b9952360bc registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler
2026-02-19 04:50:48.429741 | orchestrator | f5f1b6871ed2 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server
2026-02-19 04:50:48.429752 | orchestrator | 1dd60f42532b registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api
2026-02-19 04:50:48.429762 | orchestrator | 5e50ff584c90 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone
2026-02-19 04:50:48.429779 | orchestrator | 4e313fb0a6a2 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet
2026-02-19 04:50:48.429791 | orchestrator | 50534e515515 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh
2026-02-19 04:50:48.429802 | orchestrator | ac64453182fa registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-2
2026-02-19 04:50:48.429813 | orchestrator | ca969d550ce1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-02-19 04:50:48.429830 | orchestrator | 7f7671ec0784 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2
2026-02-19 04:50:48.429842 | orchestrator | 08ed1d151d78 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-19 04:50:48.429858 | orchestrator | 3d4641873966 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-19 04:50:48.429869 | orchestrator | c3acc3af5321 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-19 04:50:48.429880 | orchestrator | 87d97b0c1aa7 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-19 04:50:48.429891 | orchestrator | d0bc00f8ac47 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-19 04:50:48.429902 | orchestrator | e4075f9a3195 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-19 04:50:48.429913 | orchestrator | 143d1766db43 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-19 04:50:48.429924 | orchestrator | 0ba486bb072d registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-19 04:50:48.429936 | orchestrator | 05a24e814de6 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-19 04:50:48.429953 | orchestrator | 69c3abe8503d registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-19 04:50:48.429963 | orchestrator | f695f561b016 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-19 04:50:48.429973 | orchestrator | d61c6e933861 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-19 04:50:48.429982 | orchestrator | bf0decabf928 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-02-19 04:50:48.429992 | orchestrator | 4175e7eb9e40 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-02-19 04:50:48.430001 | orchestrator | 2ad4cf319841 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-02-19 04:50:48.430086 | orchestrator | a7039fe41e8d registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-02-19 04:50:48.430100 | orchestrator | 47a1a8dd9186 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-19 04:50:48.430110 | orchestrator | 4983cb4eaea0 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-19 04:50:48.430120 | orchestrator | 589fbd5d17da registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-19 04:50:48.753381 | orchestrator |
2026-02-19 04:50:48.753483 | orchestrator | ## Images @ testbed-node-2
2026-02-19 04:50:48.753501 | orchestrator |
2026-02-19 04:50:48.753512 | orchestrator | + echo
2026-02-19 04:50:48.753523 | orchestrator | + echo '## Images @ testbed-node-2'
2026-02-19 04:50:48.753534 | orchestrator | + echo
2026-02-19 04:50:48.753545 | orchestrator | + osism container testbed-node-2 images
2026-02-19 04:50:51.186778 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-19 04:50:51.186854 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-19 04:50:51.186860 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-19 04:50:51.186864 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-19 04:50:51.186881 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-19 04:50:51.186885 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-19 04:50:51.186889 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-19 04:50:51.186893 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-19 04:50:51.186898 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-19 04:50:51.186917 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-19 04:50:51.186921 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB
2026-02-19 04:50:51.186928 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-19 04:50:51.186932 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB
2026-02-19 04:50:51.186937 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB
2026-02-19 04:50:51.186941 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB
2026-02-19 04:50:51.186945 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB
2026-02-19 04:50:51.186949 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB
2026-02-19 04:50:51.186953 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB
2026-02-19 04:50:51.186957 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-19 04:50:51.186961 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB
2026-02-19 04:50:51.186965 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-19 04:50:51.186969 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB
2026-02-19 04:50:51.186973 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB
2026-02-19 04:50:51.186977 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB
2026-02-19 04:50:51.186981 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB
2026-02-19 04:50:51.186985 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB
2026-02-19 04:50:51.186989 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB
2026-02-19 04:50:51.186993 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB
2026-02-19 04:50:51.186997 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB
2026-02-19 04:50:51.187001 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-19 04:50:51.187005 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-19 04:50:51.187009 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-19 04:50:51.187023 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-19 04:50:51.187028 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-19 04:50:51.187032 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-19 04:50:51.187036 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-19 04:50:51.187044 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-19 04:50:51.187048 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-19 04:50:51.187052 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-19 04:50:51.187060 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-19 04:50:51.187065 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-19 04:50:51.187069 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-19 04:50:51.187073 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-19 04:50:51.187077 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-19 04:50:51.187081 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB
2026-02-19 04:50:51.187085 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB
2026-02-19 04:50:51.187089 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB
2026-02-19 04:50:51.187093 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB
2026-02-19 04:50:51.187097 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB
2026-02-19 04:50:51.187101 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-19 04:50:51.187105 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-19 04:50:51.187109 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-19 04:50:51.187113 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-19 04:50:51.187117 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-19 04:50:51.187121 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-19 04:50:51.187125 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-19 04:50:51.187129 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-19 04:50:51.187133 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB
2026-02-19 04:50:51.187137 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB
2026-02-19 04:50:51.187141 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB
2026-02-19 04:50:51.187145 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB
2026-02-19 04:50:51.187149 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB
2026-02-19 04:50:51.187156 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB
2026-02-19 04:50:51.187160 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-19 04:50:51.187167 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-19 04:50:51.187171 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-19 04:50:51.187176 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-19 04:50:51.187180 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-19 04:50:51.187186 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-19 04:50:51.187190 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB
2026-02-19 04:50:51.522914 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-02-19 04:50:51.532860 | orchestrator | + set -e
2026-02-19 04:50:51.532933 | orchestrator | + source /opt/manager-vars.sh
2026-02-19 04:50:51.532944 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-19 04:50:51.532984 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-19 04:50:51.532993 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-19 04:50:51.533000 | orchestrator | ++ CEPH_VERSION=reef
2026-02-19 04:50:51.533026 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-19 04:50:51.533036 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-19 04:50:51.533043 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-19 04:50:51.533051 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-19 04:50:51.533058 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-19 04:50:51.533066 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-19 04:50:51.533073 | orchestrator | ++ export ARA=false
2026-02-19 04:50:51.533080 | orchestrator | ++ ARA=false
2026-02-19 04:50:51.533088 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-19 04:50:51.533095 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-19 04:50:51.533102 | orchestrator | ++ export TEMPEST=false
2026-02-19 04:50:51.533109 | orchestrator | ++ TEMPEST=false
2026-02-19 04:50:51.533116 | orchestrator | ++ export IS_ZUUL=true
2026-02-19 04:50:51.533123 | orchestrator | ++ IS_ZUUL=true
2026-02-19 04:50:51.533131 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14
2026-02-19 04:50:51.533138 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14
2026-02-19 04:50:51.533145 | orchestrator | ++ export EXTERNAL_API=false
2026-02-19 04:50:51.533153 | orchestrator | ++ EXTERNAL_API=false
2026-02-19 04:50:51.533231 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-19 04:50:51.533241 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-19 04:50:51.533249 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-19 04:50:51.533257 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-19 04:50:51.533264 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-19 04:50:51.533271 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-19 04:50:51.533278 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-19 04:50:51.533286 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-02-19 04:50:51.543814 | orchestrator | + set -e
2026-02-19 04:50:51.544734 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-19 04:50:51.544785 | orchestrator | ++ export INTERACTIVE=false
2026-02-19 04:50:51.544799 | orchestrator | ++ INTERACTIVE=false
2026-02-19 04:50:51.544810 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-19 04:50:51.544820 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-19 04:50:51.544831 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-19 04:50:51.545643 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-19 04:50:51.552208 | orchestrator |
2026-02-19 04:50:51.552260 | orchestrator | # Ceph status
2026-02-19 04:50:51.552273 | orchestrator |
2026-02-19 04:50:51.552284 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-19 04:50:51.552297 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-19 04:50:51.552308 | orchestrator | + echo
2026-02-19 04:50:51.552319 | orchestrator | + echo '# Ceph status'
2026-02-19 04:50:51.552367 | orchestrator | + echo
2026-02-19 04:50:51.552379 | orchestrator | + ceph -s
2026-02-19 04:50:52.151733 | orchestrator | cluster:
2026-02-19 04:50:52.151862 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-02-19 04:50:52.151889 | orchestrator | health: HEALTH_OK
2026-02-19 04:50:52.151909 | orchestrator |
2026-02-19 04:50:52.151928 | orchestrator | services:
2026-02-19 04:50:52.151947 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 70m)
2026-02-19 04:50:52.151984 | orchestrator | mgr: testbed-node-0(active, since 57m), standbys: testbed-node-1, testbed-node-2
2026-02-19 04:50:52.152005 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-02-19 04:50:52.152023 | orchestrator | osd: 6 osds: 6 up (since 66m), 6 in (since 67m)
2026-02-19 04:50:52.152043 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-02-19 04:50:52.152061 | orchestrator |
2026-02-19 04:50:52.152080 | orchestrator | data:
2026-02-19 04:50:52.152099 | orchestrator | volumes: 1/1 healthy
2026-02-19 04:50:52.152118 | orchestrator | pools: 14 pools, 401 pgs
2026-02-19 04:50:52.152137 | orchestrator | objects: 556 objects, 2.2 GiB
2026-02-19 04:50:52.152156 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-02-19 04:50:52.152177 | orchestrator | pgs: 401 active+clean
2026-02-19 04:50:52.152197 | orchestrator |
2026-02-19 04:50:52.208411 | orchestrator |
2026-02-19 04:50:52.208530 | orchestrator | # Ceph versions
2026-02-19 04:50:52.208555 | orchestrator |
2026-02-19 04:50:52.208606 | orchestrator | + echo
2026-02-19 04:50:52.208624 | orchestrator | + echo '# Ceph versions'
2026-02-19 04:50:52.208636 | orchestrator | + echo
2026-02-19 04:50:52.208647 | orchestrator | + ceph versions
2026-02-19 04:50:52.807836 | orchestrator | {
2026-02-19 04:50:52.807951 | orchestrator | "mon": {
2026-02-19 04:50:52.807974 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-19 04:50:52.807995 | orchestrator | },
2026-02-19 04:50:52.808014 | orchestrator | "mgr": {
2026-02-19 04:50:52.808030 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-19 04:50:52.808049 | orchestrator | },
2026-02-19 04:50:52.808068 | orchestrator | "osd": {
2026-02-19 04:50:52.808085 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-02-19 04:50:52.808103 | orchestrator | },
2026-02-19 04:50:52.808122 | orchestrator | "mds": {
2026-02-19 04:50:52.808142 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-19 04:50:52.808161 | orchestrator | },
2026-02-19 04:50:52.808181 | orchestrator | "rgw": {
2026-02-19 04:50:52.808194 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-19 04:50:52.808205 | orchestrator | },
2026-02-19 04:50:52.808215 | orchestrator | "overall": {
2026-02-19 04:50:52.808227 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-02-19 04:50:52.808238 | orchestrator | }
2026-02-19 04:50:52.808249 | orchestrator | }
2026-02-19 04:50:52.857416 | orchestrator |
2026-02-19 04:50:52.857501 | orchestrator | # Ceph OSD tree
2026-02-19 04:50:52.857512 | orchestrator |
2026-02-19 04:50:52.857521 | orchestrator | + echo
2026-02-19 04:50:52.857529 | orchestrator | + echo '# Ceph OSD tree'
2026-02-19
04:50:52.857538 | orchestrator | + echo 2026-02-19 04:50:52.857546 | orchestrator | + ceph osd df tree 2026-02-19 04:50:53.440937 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-02-19 04:50:53.441071 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 406 MiB 113 GiB 5.90 1.00 - root default 2026-02-19 04:50:53.441094 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 0.99 - host testbed-node-3 2026-02-19 04:50:53.441111 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 62 MiB 18 GiB 7.51 1.27 201 up osd.0 2026-02-19 04:50:53.441128 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 864 MiB 803 MiB 1 KiB 62 MiB 19 GiB 4.22 0.72 189 up osd.5 2026-02-19 04:50:53.441144 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4 2026-02-19 04:50:53.441162 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 1 KiB 74 MiB 18 GiB 7.77 1.32 189 up osd.2 2026-02-19 04:50:53.441210 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 828 MiB 763 MiB 1 KiB 66 MiB 19 GiB 4.05 0.69 199 up osd.3 2026-02-19 04:50:53.441225 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-02-19 04:50:53.441243 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 66 MiB 19 GiB 5.65 0.96 190 up osd.1 2026-02-19 04:50:53.441260 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 78 MiB 19 GiB 6.18 1.05 202 up osd.4 2026-02-19 04:50:53.441277 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 406 MiB 113 GiB 5.90 2026-02-19 04:50:53.441293 | orchestrator | MIN/MAX VAR: 0.69/1.32 STDDEV: 1.44 2026-02-19 04:50:53.485747 | orchestrator | 2026-02-19 04:50:53.485838 | orchestrator | # Ceph monitor status 2026-02-19 04:50:53.485851 | orchestrator | 2026-02-19 04:50:53.485863 | orchestrator | + echo 2026-02-19 04:50:53.485874 | orchestrator | + echo '# 
Ceph monitor status' 2026-02-19 04:50:53.485885 | orchestrator | + echo 2026-02-19 04:50:53.485895 | orchestrator | + ceph mon stat 2026-02-19 04:50:54.066516 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-02-19 04:50:54.108902 | orchestrator | 2026-02-19 04:50:54.109051 | orchestrator | # Ceph quorum status 2026-02-19 04:50:54.109067 | orchestrator | 2026-02-19 04:50:54.109079 | orchestrator | + echo 2026-02-19 04:50:54.109090 | orchestrator | + echo '# Ceph quorum status' 2026-02-19 04:50:54.109101 | orchestrator | + echo 2026-02-19 04:50:54.109180 | orchestrator | + ceph quorum_status 2026-02-19 04:50:54.109195 | orchestrator | + jq 2026-02-19 04:50:54.736179 | orchestrator | { 2026-02-19 04:50:54.736444 | orchestrator | "election_epoch": 6, 2026-02-19 04:50:54.736476 | orchestrator | "quorum": [ 2026-02-19 04:50:54.736494 | orchestrator | 0, 2026-02-19 04:50:54.736511 | orchestrator | 1, 2026-02-19 04:50:54.736527 | orchestrator | 2 2026-02-19 04:50:54.736544 | orchestrator | ], 2026-02-19 04:50:54.736560 | orchestrator | "quorum_names": [ 2026-02-19 04:50:54.736608 | orchestrator | "testbed-node-0", 2026-02-19 04:50:54.736628 | orchestrator | "testbed-node-1", 2026-02-19 04:50:54.736645 | orchestrator | "testbed-node-2" 2026-02-19 04:50:54.736662 | orchestrator | ], 2026-02-19 04:50:54.736679 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-02-19 04:50:54.736698 | orchestrator | "quorum_age": 4217, 2026-02-19 04:50:54.736716 | orchestrator | "features": { 2026-02-19 04:50:54.736734 | orchestrator | "quorum_con": "4540138322906710015", 2026-02-19 04:50:54.736752 | orchestrator | "quorum_mon": [ 2026-02-19 04:50:54.736770 | 
orchestrator | "kraken", 2026-02-19 04:50:54.736789 | orchestrator | "luminous", 2026-02-19 04:50:54.736809 | orchestrator | "mimic", 2026-02-19 04:50:54.736827 | orchestrator | "osdmap-prune", 2026-02-19 04:50:54.736843 | orchestrator | "nautilus", 2026-02-19 04:50:54.736861 | orchestrator | "octopus", 2026-02-19 04:50:54.736879 | orchestrator | "pacific", 2026-02-19 04:50:54.736898 | orchestrator | "elector-pinging", 2026-02-19 04:50:54.736916 | orchestrator | "quincy", 2026-02-19 04:50:54.736934 | orchestrator | "reef" 2026-02-19 04:50:54.736954 | orchestrator | ] 2026-02-19 04:50:54.736972 | orchestrator | }, 2026-02-19 04:50:54.736991 | orchestrator | "monmap": { 2026-02-19 04:50:54.737009 | orchestrator | "epoch": 1, 2026-02-19 04:50:54.737028 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-02-19 04:50:54.737048 | orchestrator | "modified": "2026-02-19T03:40:15.002935Z", 2026-02-19 04:50:54.737068 | orchestrator | "created": "2026-02-19T03:40:15.002935Z", 2026-02-19 04:50:54.737088 | orchestrator | "min_mon_release": 18, 2026-02-19 04:50:54.737106 | orchestrator | "min_mon_release_name": "reef", 2026-02-19 04:50:54.737123 | orchestrator | "election_strategy": 1, 2026-02-19 04:50:54.737140 | orchestrator | "disallowed_leaders: ": "", 2026-02-19 04:50:54.737158 | orchestrator | "stretch_mode": false, 2026-02-19 04:50:54.737176 | orchestrator | "tiebreaker_mon": "", 2026-02-19 04:50:54.737195 | orchestrator | "removed_ranks: ": "", 2026-02-19 04:50:54.737212 | orchestrator | "features": { 2026-02-19 04:50:54.737230 | orchestrator | "persistent": [ 2026-02-19 04:50:54.737249 | orchestrator | "kraken", 2026-02-19 04:50:54.737266 | orchestrator | "luminous", 2026-02-19 04:50:54.737318 | orchestrator | "mimic", 2026-02-19 04:50:54.737339 | orchestrator | "osdmap-prune", 2026-02-19 04:50:54.737356 | orchestrator | "nautilus", 2026-02-19 04:50:54.737373 | orchestrator | "octopus", 2026-02-19 04:50:54.737391 | orchestrator | "pacific", 2026-02-19 
04:50:54.737410 | orchestrator | "elector-pinging", 2026-02-19 04:50:54.737428 | orchestrator | "quincy", 2026-02-19 04:50:54.737447 | orchestrator | "reef" 2026-02-19 04:50:54.737466 | orchestrator | ], 2026-02-19 04:50:54.737483 | orchestrator | "optional": [] 2026-02-19 04:50:54.737501 | orchestrator | }, 2026-02-19 04:50:54.737519 | orchestrator | "mons": [ 2026-02-19 04:50:54.737537 | orchestrator | { 2026-02-19 04:50:54.737640 | orchestrator | "rank": 0, 2026-02-19 04:50:54.737667 | orchestrator | "name": "testbed-node-0", 2026-02-19 04:50:54.737687 | orchestrator | "public_addrs": { 2026-02-19 04:50:54.737706 | orchestrator | "addrvec": [ 2026-02-19 04:50:54.737726 | orchestrator | { 2026-02-19 04:50:54.737766 | orchestrator | "type": "v2", 2026-02-19 04:50:54.737802 | orchestrator | "addr": "192.168.16.10:3300", 2026-02-19 04:50:54.737821 | orchestrator | "nonce": 0 2026-02-19 04:50:54.737840 | orchestrator | }, 2026-02-19 04:50:54.737859 | orchestrator | { 2026-02-19 04:50:54.737877 | orchestrator | "type": "v1", 2026-02-19 04:50:54.737895 | orchestrator | "addr": "192.168.16.10:6789", 2026-02-19 04:50:54.737914 | orchestrator | "nonce": 0 2026-02-19 04:50:54.737934 | orchestrator | } 2026-02-19 04:50:54.737952 | orchestrator | ] 2026-02-19 04:50:54.737971 | orchestrator | }, 2026-02-19 04:50:54.737989 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-02-19 04:50:54.738007 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-02-19 04:50:54.738131 | orchestrator | "priority": 0, 2026-02-19 04:50:54.738151 | orchestrator | "weight": 0, 2026-02-19 04:50:54.738168 | orchestrator | "crush_location": "{}" 2026-02-19 04:50:54.738185 | orchestrator | }, 2026-02-19 04:50:54.738202 | orchestrator | { 2026-02-19 04:50:54.738219 | orchestrator | "rank": 1, 2026-02-19 04:50:54.738236 | orchestrator | "name": "testbed-node-1", 2026-02-19 04:50:54.738254 | orchestrator | "public_addrs": { 2026-02-19 04:50:54.738271 | orchestrator | "addrvec": [ 2026-02-19 
04:50:54.738287 | orchestrator | { 2026-02-19 04:50:54.738305 | orchestrator | "type": "v2", 2026-02-19 04:50:54.738322 | orchestrator | "addr": "192.168.16.11:3300", 2026-02-19 04:50:54.738339 | orchestrator | "nonce": 0 2026-02-19 04:50:54.738357 | orchestrator | }, 2026-02-19 04:50:54.738376 | orchestrator | { 2026-02-19 04:50:54.738393 | orchestrator | "type": "v1", 2026-02-19 04:50:54.738410 | orchestrator | "addr": "192.168.16.11:6789", 2026-02-19 04:50:54.738428 | orchestrator | "nonce": 0 2026-02-19 04:50:54.738445 | orchestrator | } 2026-02-19 04:50:54.738462 | orchestrator | ] 2026-02-19 04:50:54.738481 | orchestrator | }, 2026-02-19 04:50:54.738498 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-02-19 04:50:54.738516 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-02-19 04:50:54.738534 | orchestrator | "priority": 0, 2026-02-19 04:50:54.738552 | orchestrator | "weight": 0, 2026-02-19 04:50:54.738570 | orchestrator | "crush_location": "{}" 2026-02-19 04:50:54.738674 | orchestrator | }, 2026-02-19 04:50:54.738693 | orchestrator | { 2026-02-19 04:50:54.738709 | orchestrator | "rank": 2, 2026-02-19 04:50:54.738727 | orchestrator | "name": "testbed-node-2", 2026-02-19 04:50:54.738744 | orchestrator | "public_addrs": { 2026-02-19 04:50:54.738759 | orchestrator | "addrvec": [ 2026-02-19 04:50:54.738769 | orchestrator | { 2026-02-19 04:50:54.738778 | orchestrator | "type": "v2", 2026-02-19 04:50:54.738787 | orchestrator | "addr": "192.168.16.12:3300", 2026-02-19 04:50:54.738797 | orchestrator | "nonce": 0 2026-02-19 04:50:54.738806 | orchestrator | }, 2026-02-19 04:50:54.738816 | orchestrator | { 2026-02-19 04:50:54.738825 | orchestrator | "type": "v1", 2026-02-19 04:50:54.738834 | orchestrator | "addr": "192.168.16.12:6789", 2026-02-19 04:50:54.738844 | orchestrator | "nonce": 0 2026-02-19 04:50:54.738854 | orchestrator | } 2026-02-19 04:50:54.738863 | orchestrator | ] 2026-02-19 04:50:54.738872 | orchestrator | }, 2026-02-19 04:50:54.738882 
| orchestrator | "addr": "192.168.16.12:6789/0", 2026-02-19 04:50:54.738891 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-02-19 04:50:54.738901 | orchestrator | "priority": 0, 2026-02-19 04:50:54.738927 | orchestrator | "weight": 0, 2026-02-19 04:50:54.738937 | orchestrator | "crush_location": "{}" 2026-02-19 04:50:54.738946 | orchestrator | } 2026-02-19 04:50:54.738956 | orchestrator | ] 2026-02-19 04:50:54.738965 | orchestrator | } 2026-02-19 04:50:54.738974 | orchestrator | } 2026-02-19 04:50:54.739002 | orchestrator | 2026-02-19 04:50:54.739012 | orchestrator | # Ceph free space status 2026-02-19 04:50:54.739021 | orchestrator | 2026-02-19 04:50:54.739031 | orchestrator | + echo 2026-02-19 04:50:54.739040 | orchestrator | + echo '# Ceph free space status' 2026-02-19 04:50:54.739050 | orchestrator | + echo 2026-02-19 04:50:54.739059 | orchestrator | + ceph df 2026-02-19 04:50:55.378381 | orchestrator | --- RAW STORAGE --- 2026-02-19 04:50:55.378486 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-02-19 04:50:55.378523 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-02-19 04:50:55.378563 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-02-19 04:50:55.378658 | orchestrator | 2026-02-19 04:50:55.378677 | orchestrator | --- POOLS --- 2026-02-19 04:50:55.378689 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-02-19 04:50:55.378701 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-02-19 04:50:55.378712 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-02-19 04:50:55.378722 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-02-19 04:50:55.378733 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-02-19 04:50:55.378744 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-02-19 04:50:55.378755 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-02-19 04:50:55.378766 | orchestrator | default.rgw.log 7 32 
3.6 KiB 209 408 KiB 0 35 GiB 2026-02-19 04:50:55.378777 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-02-19 04:50:55.378788 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2026-02-19 04:50:55.378798 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-02-19 04:50:55.378809 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-02-19 04:50:55.378820 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB 2026-02-19 04:50:55.378830 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-02-19 04:50:55.378841 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-02-19 04:50:55.427489 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-19 04:50:55.486531 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-19 04:50:55.486690 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-02-19 04:50:55.486706 | orchestrator | + osism apply facts 2026-02-19 04:50:57.563882 | orchestrator | 2026-02-19 04:50:57 | INFO  | Task 9e1c9933-2045-46b1-83f4-2bed98f10fa4 (facts) was prepared for execution. 2026-02-19 04:50:57.563982 | orchestrator | 2026-02-19 04:50:57 | INFO  | It takes a moment until task 9e1c9933-2045-46b1-83f4-2bed98f10fa4 (facts) has been started and output is visible here. 
2026-02-19 04:51:11.563872 | orchestrator | 2026-02-19 04:51:11.563995 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-19 04:51:11.564022 | orchestrator | 2026-02-19 04:51:11.564043 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-19 04:51:11.564065 | orchestrator | Thursday 19 February 2026 04:51:01 +0000 (0:00:00.276) 0:00:00.276 ***** 2026-02-19 04:51:11.564085 | orchestrator | ok: [testbed-manager] 2026-02-19 04:51:11.564106 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:51:11.564126 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:51:11.564158 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:51:11.564176 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:51:11.564197 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:51:11.564218 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:51:11.564237 | orchestrator | 2026-02-19 04:51:11.564259 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-19 04:51:11.564313 | orchestrator | Thursday 19 February 2026 04:51:03 +0000 (0:00:01.198) 0:00:01.474 ***** 2026-02-19 04:51:11.564335 | orchestrator | skipping: [testbed-manager] 2026-02-19 04:51:11.564356 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:51:11.564375 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:51:11.564394 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:51:11.564415 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:51:11.564436 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:51:11.564456 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:51:11.564471 | orchestrator | 2026-02-19 04:51:11.564482 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-19 04:51:11.564496 | orchestrator | 2026-02-19 04:51:11.564514 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-19 04:51:11.564532 | orchestrator | Thursday 19 February 2026 04:51:04 +0000 (0:00:01.397) 0:00:02.872 ***** 2026-02-19 04:51:11.564550 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:51:11.564569 | orchestrator | ok: [testbed-manager] 2026-02-19 04:51:11.564616 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:51:11.564636 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:51:11.564655 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:51:11.564674 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:51:11.564691 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:51:11.564710 | orchestrator | 2026-02-19 04:51:11.564728 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-19 04:51:11.564749 | orchestrator | 2026-02-19 04:51:11.564768 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-19 04:51:11.564788 | orchestrator | Thursday 19 February 2026 04:51:10 +0000 (0:00:05.877) 0:00:08.750 ***** 2026-02-19 04:51:11.564807 | orchestrator | skipping: [testbed-manager] 2026-02-19 04:51:11.564826 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:51:11.564844 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:51:11.564863 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:51:11.564882 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:51:11.564899 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:51:11.564916 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:51:11.564935 | orchestrator | 2026-02-19 04:51:11.564954 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:51:11.564973 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 04:51:11.564994 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-19 04:51:11.565013 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 04:51:11.565049 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 04:51:11.565069 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 04:51:11.565087 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 04:51:11.565105 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 04:51:11.565123 | orchestrator | 2026-02-19 04:51:11.565140 | orchestrator | 2026-02-19 04:51:11.565159 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:51:11.565178 | orchestrator | Thursday 19 February 2026 04:51:11 +0000 (0:00:00.610) 0:00:09.360 ***** 2026-02-19 04:51:11.565197 | orchestrator | =============================================================================== 2026-02-19 04:51:11.565215 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.88s 2026-02-19 04:51:11.565250 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.40s 2026-02-19 04:51:11.565268 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.20s 2026-02-19 04:51:11.565287 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2026-02-19 04:51:11.947688 | orchestrator | + osism validate ceph-mons 2026-02-19 04:51:45.132036 | orchestrator | 2026-02-19 04:51:45.132132 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-02-19 04:51:45.132144 | orchestrator | 2026-02-19 04:51:45.132152 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-02-19 04:51:45.132161 | orchestrator | Thursday 19 February 2026 04:51:28 +0000 (0:00:00.429) 0:00:00.429 ***** 2026-02-19 04:51:45.132169 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-19 04:51:45.132176 | orchestrator | 2026-02-19 04:51:45.132183 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-19 04:51:45.132191 | orchestrator | Thursday 19 February 2026 04:51:29 +0000 (0:00:00.864) 0:00:01.294 ***** 2026-02-19 04:51:45.132198 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-19 04:51:45.132206 | orchestrator | 2026-02-19 04:51:45.132213 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-19 04:51:45.132220 | orchestrator | Thursday 19 February 2026 04:51:30 +0000 (0:00:01.010) 0:00:02.305 ***** 2026-02-19 04:51:45.132227 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:51:45.132235 | orchestrator | 2026-02-19 04:51:45.132246 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-19 04:51:45.132253 | orchestrator | Thursday 19 February 2026 04:51:30 +0000 (0:00:00.123) 0:00:02.428 ***** 2026-02-19 04:51:45.132260 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:51:45.132268 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:51:45.132275 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:51:45.132281 | orchestrator | 2026-02-19 04:51:45.132288 | orchestrator | TASK [Get container info] ****************************************************** 2026-02-19 04:51:45.132296 | orchestrator | Thursday 19 February 2026 04:51:31 +0000 (0:00:00.319) 0:00:02.747 ***** 2026-02-19 04:51:45.132303 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:51:45.132310 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:51:45.132317 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:51:45.132324 | 
orchestrator | 2026-02-19 04:51:45.132331 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-19 04:51:45.132338 | orchestrator | Thursday 19 February 2026 04:51:32 +0000 (0:00:01.097) 0:00:03.844 ***** 2026-02-19 04:51:45.132345 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:51:45.132353 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:51:45.132360 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:51:45.132367 | orchestrator | 2026-02-19 04:51:45.132374 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-19 04:51:45.132381 | orchestrator | Thursday 19 February 2026 04:51:32 +0000 (0:00:00.307) 0:00:04.151 ***** 2026-02-19 04:51:45.132388 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:51:45.132395 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:51:45.132402 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:51:45.132409 | orchestrator | 2026-02-19 04:51:45.132416 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-19 04:51:45.132423 | orchestrator | Thursday 19 February 2026 04:51:33 +0000 (0:00:00.542) 0:00:04.694 ***** 2026-02-19 04:51:45.132430 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:51:45.132437 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:51:45.132443 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:51:45.132449 | orchestrator | 2026-02-19 04:51:45.132455 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-02-19 04:51:45.132461 | orchestrator | Thursday 19 February 2026 04:51:33 +0000 (0:00:00.321) 0:00:05.016 ***** 2026-02-19 04:51:45.132467 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:51:45.132499 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:51:45.132507 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:51:45.132513 | orchestrator | 2026-02-19 
04:51:45.132520 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-02-19 04:51:45.132527 | orchestrator | Thursday 19 February 2026 04:51:33 +0000 (0:00:00.304) 0:00:05.321 ***** 2026-02-19 04:51:45.132534 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:51:45.132542 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:51:45.132549 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:51:45.132558 | orchestrator | 2026-02-19 04:51:45.132566 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-19 04:51:45.132574 | orchestrator | Thursday 19 February 2026 04:51:34 +0000 (0:00:00.483) 0:00:05.804 ***** 2026-02-19 04:51:45.132581 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:51:45.132589 | orchestrator | 2026-02-19 04:51:45.132614 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-19 04:51:45.132622 | orchestrator | Thursday 19 February 2026 04:51:34 +0000 (0:00:00.279) 0:00:06.084 ***** 2026-02-19 04:51:45.132630 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:51:45.132638 | orchestrator | 2026-02-19 04:51:45.132646 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-19 04:51:45.132654 | orchestrator | Thursday 19 February 2026 04:51:34 +0000 (0:00:00.279) 0:00:06.363 ***** 2026-02-19 04:51:45.132662 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:51:45.132671 | orchestrator | 2026-02-19 04:51:45.132679 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-19 04:51:45.132686 | orchestrator | Thursday 19 February 2026 04:51:35 +0000 (0:00:00.269) 0:00:06.633 ***** 2026-02-19 04:51:45.132694 | orchestrator | 2026-02-19 04:51:45.132702 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-19 04:51:45.132709 | orchestrator | 
Thursday 19 February 2026 04:51:35 +0000 (0:00:00.076) 0:00:06.709 *****
2026-02-19 04:51:45.132717 | orchestrator |
2026-02-19 04:51:45.132725 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-19 04:51:45.132732 | orchestrator | Thursday 19 February 2026 04:51:35 +0000 (0:00:00.073) 0:00:06.782 *****
2026-02-19 04:51:45.132740 | orchestrator |
2026-02-19 04:51:45.132747 | orchestrator | TASK [Print report file information] *******************************************
2026-02-19 04:51:45.132755 | orchestrator | Thursday 19 February 2026 04:51:35 +0000 (0:00:00.090) 0:00:06.873 *****
2026-02-19 04:51:45.132762 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:51:45.132771 | orchestrator |
2026-02-19 04:51:45.132778 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-19 04:51:45.132801 | orchestrator | Thursday 19 February 2026 04:51:35 +0000 (0:00:00.253) 0:00:07.127 *****
2026-02-19 04:51:45.132810 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:51:45.132818 | orchestrator |
2026-02-19 04:51:45.132841 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-02-19 04:51:45.132850 | orchestrator | Thursday 19 February 2026 04:51:35 +0000 (0:00:00.257) 0:00:07.384 *****
2026-02-19 04:51:45.132858 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:51:45.132866 | orchestrator |
2026-02-19 04:51:45.132874 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-02-19 04:51:45.132881 | orchestrator | Thursday 19 February 2026 04:51:35 +0000 (0:00:00.127) 0:00:07.512 *****
2026-02-19 04:51:45.132890 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:51:45.132902 | orchestrator |
2026-02-19 04:51:45.132909 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-02-19 04:51:45.132916 | orchestrator | Thursday 19 February 2026 04:51:37 +0000 (0:00:01.741) 0:00:09.254 *****
2026-02-19 04:51:45.132923 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:51:45.132930 | orchestrator |
2026-02-19 04:51:45.132937 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-02-19 04:51:45.132944 | orchestrator | Thursday 19 February 2026 04:51:38 +0000 (0:00:00.132) 0:00:09.816 *****
2026-02-19 04:51:45.132951 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:51:45.132964 | orchestrator |
2026-02-19 04:51:45.132971 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-02-19 04:51:45.132978 | orchestrator | Thursday 19 February 2026 04:51:38 +0000 (0:00:00.335) 0:00:09.949 *****
2026-02-19 04:51:45.132985 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:51:45.132992 | orchestrator |
2026-02-19 04:51:45.132999 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-02-19 04:51:45.133006 | orchestrator | Thursday 19 February 2026 04:51:38 +0000 (0:00:00.308) 0:00:10.285 *****
2026-02-19 04:51:45.133012 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:51:45.133018 | orchestrator |
2026-02-19 04:51:45.133025 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-02-19 04:51:45.133032 | orchestrator | Thursday 19 February 2026 04:51:39 +0000 (0:00:00.131) 0:00:10.593 *****
2026-02-19 04:51:45.133038 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:51:45.133046 | orchestrator |
2026-02-19 04:51:45.133052 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-02-19 04:51:45.133059 | orchestrator | Thursday 19 February 2026 04:51:39 +0000 (0:00:00.131) 0:00:10.724 *****
2026-02-19 04:51:45.133066 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:51:45.133072 | orchestrator |
2026-02-19 04:51:45.133079 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-02-19 04:51:45.133085 | orchestrator | Thursday 19 February 2026 04:51:39 +0000 (0:00:00.129) 0:00:10.854 *****
2026-02-19 04:51:45.133092 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:51:45.133099 | orchestrator |
2026-02-19 04:51:45.133106 | orchestrator | TASK [Gather status data] ******************************************************
2026-02-19 04:51:45.133113 | orchestrator | Thursday 19 February 2026 04:51:39 +0000 (0:00:00.128) 0:00:10.982 *****
2026-02-19 04:51:45.133120 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:51:45.133127 | orchestrator |
2026-02-19 04:51:45.133134 | orchestrator | TASK [Set health test data] ****************************************************
2026-02-19 04:51:45.133141 | orchestrator | Thursday 19 February 2026 04:51:40 +0000 (0:00:01.461) 0:00:12.443 *****
2026-02-19 04:51:45.133148 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:51:45.133155 | orchestrator |
2026-02-19 04:51:45.133162 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-02-19 04:51:45.133169 | orchestrator | Thursday 19 February 2026 04:51:41 +0000 (0:00:00.296) 0:00:12.740 *****
2026-02-19 04:51:45.133176 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:51:45.133183 | orchestrator |
2026-02-19 04:51:45.133190 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-02-19 04:51:45.133197 | orchestrator | Thursday 19 February 2026 04:51:41 +0000 (0:00:00.155) 0:00:12.895 *****
2026-02-19 04:51:45.133204 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:51:45.133211 | orchestrator |
2026-02-19 04:51:45.133217 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-02-19 04:51:45.133224 | orchestrator | Thursday 19 February 2026 04:51:41 +0000 (0:00:00.134) 0:00:13.030 *****
2026-02-19 04:51:45.133231 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:51:45.133238 | orchestrator |
2026-02-19 04:51:45.133245 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-02-19 04:51:45.133252 | orchestrator | Thursday 19 February 2026 04:51:41 +0000 (0:00:00.119) 0:00:13.149 *****
2026-02-19 04:51:45.133264 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:51:45.133271 | orchestrator |
2026-02-19 04:51:45.133278 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-19 04:51:45.133285 | orchestrator | Thursday 19 February 2026 04:51:41 +0000 (0:00:00.338) 0:00:13.488 *****
2026-02-19 04:51:45.133292 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-19 04:51:45.133299 | orchestrator |
2026-02-19 04:51:45.133306 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-19 04:51:45.133313 | orchestrator | Thursday 19 February 2026 04:51:42 +0000 (0:00:00.300) 0:00:13.788 *****
2026-02-19 04:51:45.133325 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:51:45.133332 | orchestrator |
2026-02-19 04:51:45.133340 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-19 04:51:45.133347 | orchestrator | Thursday 19 February 2026 04:51:42 +0000 (0:00:00.253) 0:00:14.042 *****
2026-02-19 04:51:45.133354 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-19 04:51:45.133361 | orchestrator |
2026-02-19 04:51:45.133369 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-19 04:51:45.133376 | orchestrator | Thursday 19 February 2026 04:51:44 +0000 (0:00:01.807) 0:00:15.850 *****
2026-02-19 04:51:45.133383 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-19 04:51:45.133390 | orchestrator |
2026-02-19 04:51:45.133396 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-19 04:51:45.133403 | orchestrator | Thursday 19 February 2026 04:51:44 +0000 (0:00:00.336) 0:00:16.186 *****
2026-02-19 04:51:45.133411 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-19 04:51:45.133418 | orchestrator |
2026-02-19 04:51:45.133430 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-19 04:51:47.881915 | orchestrator | Thursday 19 February 2026 04:51:44 +0000 (0:00:00.272) 0:00:16.459 *****
2026-02-19 04:51:47.882109 | orchestrator |
2026-02-19 04:51:47.882143 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-19 04:51:47.882156 | orchestrator | Thursday 19 February 2026 04:51:44 +0000 (0:00:00.071) 0:00:16.530 *****
2026-02-19 04:51:47.882167 | orchestrator |
2026-02-19 04:51:47.882179 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-19 04:51:47.882190 | orchestrator | Thursday 19 February 2026 04:51:45 +0000 (0:00:00.070) 0:00:16.601 *****
2026-02-19 04:51:47.882200 | orchestrator |
2026-02-19 04:51:47.882211 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-19 04:51:47.882222 | orchestrator | Thursday 19 February 2026 04:51:45 +0000 (0:00:00.074) 0:00:16.675 *****
2026-02-19 04:51:47.882232 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-19 04:51:47.882243 | orchestrator |
2026-02-19 04:51:47.882254 | orchestrator | TASK [Print report file information] *******************************************
2026-02-19 04:51:47.882264 | orchestrator | Thursday 19 February 2026 04:51:46 +0000 (0:00:01.510) 0:00:18.186 *****
2026-02-19 04:51:47.882275 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-19 04:51:47.882285 | orchestrator |  "msg": [
2026-02-19 04:51:47.882297 | orchestrator |  "Validator run completed.",
2026-02-19 04:51:47.882308 | orchestrator |  "You can find the report file here:",
2026-02-19 04:51:47.882319 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-02-19T04:51:29+00:00-report.json",
2026-02-19 04:51:47.882330 | orchestrator |  "on the following host:",
2026-02-19 04:51:47.882341 | orchestrator |  "testbed-manager"
2026-02-19 04:51:47.882352 | orchestrator |  ]
2026-02-19 04:51:47.882362 | orchestrator | }
2026-02-19 04:51:47.882373 | orchestrator |
2026-02-19 04:51:47.882385 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 04:51:47.882404 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-02-19 04:51:47.882433 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-19 04:51:47.882455 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-19 04:51:47.882472 | orchestrator |
2026-02-19 04:51:47.882490 | orchestrator |
2026-02-19 04:51:47.882507 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 04:51:47.882523 | orchestrator | Thursday 19 February 2026 04:51:47 +0000 (0:00:00.891) 0:00:19.078 *****
2026-02-19 04:51:47.882577 | orchestrator | ===============================================================================
2026-02-19 04:51:47.882670 | orchestrator | Aggregate test results step one ----------------------------------------- 1.81s
2026-02-19 04:51:47.882692 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.74s
2026-02-19 04:51:47.882708 | orchestrator | Write report file ------------------------------------------------------- 1.51s
2026-02-19 04:51:47.882720 | orchestrator | Gather status data ------------------------------------------------------ 1.46s
2026-02-19 04:51:47.882732 | orchestrator | Get container info ------------------------------------------------------ 1.10s
2026-02-19 04:51:47.882745 | orchestrator | Create report output directory ------------------------------------------ 1.01s
2026-02-19 04:51:47.882757 | orchestrator | Print report file information ------------------------------------------- 0.89s
2026-02-19 04:51:47.882770 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s
2026-02-19 04:51:47.882782 | orchestrator | Set quorum test data ---------------------------------------------------- 0.56s
2026-02-19 04:51:47.882796 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s
2026-02-19 04:51:47.882825 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.48s
2026-02-19 04:51:47.882836 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.34s
2026-02-19 04:51:47.882846 | orchestrator | Aggregate test results step two ----------------------------------------- 0.34s
2026-02-19 04:51:47.882857 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s
2026-02-19 04:51:47.882867 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2026-02-19 04:51:47.882878 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s
2026-02-19 04:51:47.882888 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s
2026-02-19 04:51:47.882899 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s
2026-02-19 04:51:47.882909 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s
2026-02-19 04:51:47.882920 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.30s
2026-02-19 04:51:48.190825 | orchestrator | + osism validate ceph-mgrs
2026-02-19 04:52:19.536889 | orchestrator |
2026-02-19 04:52:19.537018 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-02-19 04:52:19.537049 | orchestrator |
2026-02-19 04:52:19.537068 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-19 04:52:19.537085 | orchestrator | Thursday 19 February 2026 04:52:04 +0000 (0:00:00.433) 0:00:00.433 *****
2026-02-19 04:52:19.537106 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-19 04:52:19.537126 | orchestrator |
2026-02-19 04:52:19.537147 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-19 04:52:19.537163 | orchestrator | Thursday 19 February 2026 04:52:05 +0000 (0:00:00.893) 0:00:01.327 *****
2026-02-19 04:52:19.537174 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-19 04:52:19.537185 | orchestrator |
2026-02-19 04:52:19.537196 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-19 04:52:19.537207 | orchestrator | Thursday 19 February 2026 04:52:06 +0000 (0:00:00.964) 0:00:02.292 *****
2026-02-19 04:52:19.537218 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:52:19.537230 | orchestrator |
2026-02-19 04:52:19.537241 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-02-19 04:52:19.537252 | orchestrator | Thursday 19 February 2026 04:52:06 +0000 (0:00:00.120) 0:00:02.413 *****
2026-02-19 04:52:19.537262 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:52:19.537273 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:52:19.537284 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:52:19.537295 | orchestrator |
2026-02-19 04:52:19.537306 | orchestrator | TASK [Get container info] ******************************************************
2026-02-19 04:52:19.537317 | orchestrator | Thursday 19 February 2026 04:52:07 +0000 (0:00:00.295) 0:00:02.708 *****
2026-02-19 04:52:19.537352 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:52:19.537363 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:52:19.537374 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:52:19.537385 | orchestrator |
2026-02-19 04:52:19.537397 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-02-19 04:52:19.537410 | orchestrator | Thursday 19 February 2026 04:52:08 +0000 (0:00:00.992) 0:00:03.701 *****
2026-02-19 04:52:19.537422 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:52:19.537435 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:52:19.537447 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:52:19.537459 | orchestrator |
2026-02-19 04:52:19.537472 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-02-19 04:52:19.537485 | orchestrator | Thursday 19 February 2026 04:52:08 +0000 (0:00:00.304) 0:00:04.005 *****
2026-02-19 04:52:19.537498 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:52:19.537510 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:52:19.537520 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:52:19.537531 | orchestrator |
2026-02-19 04:52:19.537542 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-19 04:52:19.537552 | orchestrator | Thursday 19 February 2026 04:52:08 +0000 (0:00:00.484) 0:00:04.490 *****
2026-02-19 04:52:19.537563 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:52:19.537574 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:52:19.537584 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:52:19.537595 | orchestrator |
2026-02-19 04:52:19.537637 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-02-19 04:52:19.537649 | orchestrator | Thursday 19 February 2026 04:52:09 +0000 (0:00:00.337) 0:00:04.827 *****
2026-02-19 04:52:19.537660 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:52:19.537671 | orchestrator | skipping: [testbed-node-1]
2026-02-19 04:52:19.537681 | orchestrator | skipping: [testbed-node-2]
2026-02-19 04:52:19.537692 | orchestrator |
2026-02-19 04:52:19.537703 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-02-19 04:52:19.537714 | orchestrator | Thursday 19 February 2026 04:52:09 +0000 (0:00:00.287) 0:00:05.115 *****
2026-02-19 04:52:19.537724 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:52:19.537735 | orchestrator | ok: [testbed-node-1]
2026-02-19 04:52:19.537746 | orchestrator | ok: [testbed-node-2]
2026-02-19 04:52:19.537756 | orchestrator |
2026-02-19 04:52:19.537767 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-19 04:52:19.537778 | orchestrator | Thursday 19 February 2026 04:52:10 +0000 (0:00:00.477) 0:00:05.592 *****
2026-02-19 04:52:19.537788 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:52:19.537799 | orchestrator |
2026-02-19 04:52:19.537810 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-19 04:52:19.537821 | orchestrator | Thursday 19 February 2026 04:52:10 +0000 (0:00:00.260) 0:00:05.853 *****
2026-02-19 04:52:19.537831 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:52:19.537842 | orchestrator |
2026-02-19 04:52:19.537870 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-19 04:52:19.537881 | orchestrator | Thursday 19 February 2026 04:52:10 +0000 (0:00:00.276) 0:00:06.130 *****
2026-02-19 04:52:19.537892 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:52:19.537902 | orchestrator |
2026-02-19 04:52:19.537913 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-19 04:52:19.537924 | orchestrator | Thursday 19 February 2026 04:52:10 +0000 (0:00:00.242) 0:00:06.373 *****
2026-02-19 04:52:19.537935 | orchestrator |
2026-02-19 04:52:19.537945 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-19 04:52:19.537956 | orchestrator | Thursday 19 February 2026 04:52:10 +0000 (0:00:00.078) 0:00:06.451 *****
2026-02-19 04:52:19.537967 | orchestrator |
2026-02-19 04:52:19.537978 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-19 04:52:19.537988 | orchestrator | Thursday 19 February 2026 04:52:10 +0000 (0:00:00.073) 0:00:06.525 *****
2026-02-19 04:52:19.538009 | orchestrator |
2026-02-19 04:52:19.538084 | orchestrator | TASK [Print report file information] *******************************************
2026-02-19 04:52:19.538096 | orchestrator | Thursday 19 February 2026 04:52:11 +0000 (0:00:00.092) 0:00:06.617 *****
2026-02-19 04:52:19.538107 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:52:19.538117 | orchestrator |
2026-02-19 04:52:19.538128 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-19 04:52:19.538139 | orchestrator | Thursday 19 February 2026 04:52:11 +0000 (0:00:00.245) 0:00:06.863 *****
2026-02-19 04:52:19.538150 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:52:19.538160 | orchestrator |
2026-02-19 04:52:19.538190 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-02-19 04:52:19.538202 | orchestrator | Thursday 19 February 2026 04:52:11 +0000 (0:00:00.260) 0:00:07.123 *****
2026-02-19 04:52:19.538213 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:52:19.538224 | orchestrator |
2026-02-19 04:52:19.538235 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-02-19 04:52:19.538246 | orchestrator | Thursday 19 February 2026 04:52:11 +0000 (0:00:00.129) 0:00:07.253 *****
2026-02-19 04:52:19.538256 | orchestrator | changed: [testbed-node-0]
2026-02-19 04:52:19.538267 | orchestrator |
2026-02-19 04:52:19.538278 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-02-19 04:52:19.538288 | orchestrator | Thursday 19 February 2026 04:52:13 +0000 (0:00:02.088) 0:00:09.341 *****
2026-02-19 04:52:19.538299 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:52:19.538310 | orchestrator |
2026-02-19 04:52:19.538338 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-02-19 04:52:19.538350 | orchestrator | Thursday 19 February 2026 04:52:14 +0000 (0:00:00.443) 0:00:09.784 *****
2026-02-19 04:52:19.538360 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:52:19.538371 | orchestrator |
2026-02-19 04:52:19.538382 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-02-19 04:52:19.538393 | orchestrator | Thursday 19 February 2026 04:52:14 +0000 (0:00:00.141) 0:00:10.131 *****
2026-02-19 04:52:19.538403 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:52:19.538414 | orchestrator |
2026-02-19 04:52:19.538425 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-02-19 04:52:19.538436 | orchestrator | Thursday 19 February 2026 04:52:14 +0000 (0:00:00.144) 0:00:10.272 *****
2026-02-19 04:52:19.538446 | orchestrator | ok: [testbed-node-0]
2026-02-19 04:52:19.538457 | orchestrator |
2026-02-19 04:52:19.538467 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-19 04:52:19.538478 | orchestrator | Thursday 19 February 2026 04:52:14 +0000 (0:00:00.144) 0:00:10.417 *****
2026-02-19 04:52:19.538489 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-19 04:52:19.538499 | orchestrator |
2026-02-19 04:52:19.538510 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-19 04:52:19.538520 | orchestrator | Thursday 19 February 2026 04:52:15 +0000 (0:00:00.278) 0:00:10.696 *****
2026-02-19 04:52:19.538531 | orchestrator | skipping: [testbed-node-0]
2026-02-19 04:52:19.538542 | orchestrator |
2026-02-19 04:52:19.538552 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-19 04:52:19.538563 | orchestrator | Thursday 19 February 2026 04:52:15 +0000 (0:00:00.278) 0:00:10.974 *****
2026-02-19 04:52:19.538574 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-19 04:52:19.538585 | orchestrator |
2026-02-19 04:52:19.538596 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-19 04:52:19.538630 | orchestrator | Thursday 19 February 2026 04:52:16 +0000 (0:00:01.339) 0:00:12.314 *****
2026-02-19 04:52:19.538642 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-19 04:52:19.538652 | orchestrator |
2026-02-19 04:52:19.538663 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-19 04:52:19.538674 | orchestrator | Thursday 19 February 2026 04:52:17 +0000 (0:00:00.267) 0:00:12.582 *****
2026-02-19 04:52:19.538695 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-19 04:52:19.538706 | orchestrator |
2026-02-19 04:52:19.538717 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-19 04:52:19.538727 | orchestrator | Thursday 19 February 2026 04:52:17 +0000 (0:00:00.273) 0:00:12.855 *****
2026-02-19 04:52:19.538738 | orchestrator |
2026-02-19 04:52:19.538749 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-19 04:52:19.538759 | orchestrator | Thursday 19 February 2026 04:52:17 +0000 (0:00:00.070) 0:00:12.926 *****
2026-02-19 04:52:19.538770 | orchestrator |
2026-02-19 04:52:19.538781 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-19 04:52:19.538791 | orchestrator | Thursday 19 February 2026 04:52:17 +0000 (0:00:00.071) 0:00:12.997 *****
2026-02-19 04:52:19.538802 | orchestrator |
2026-02-19 04:52:19.538813 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-19 04:52:19.538824 | orchestrator | Thursday 19 February 2026 04:52:17 +0000 (0:00:00.295) 0:00:13.293 *****
2026-02-19 04:52:19.538834 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-19 04:52:19.538845 | orchestrator |
2026-02-19 04:52:19.538856 | orchestrator | TASK [Print report file information] *******************************************
2026-02-19 04:52:19.538866 | orchestrator | Thursday 19 February 2026 04:52:19 +0000 (0:00:01.361) 0:00:14.654 *****
2026-02-19 04:52:19.538877 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-19 04:52:19.538888 | orchestrator |  "msg": [
2026-02-19 04:52:19.538899 | orchestrator |  "Validator run completed.",
2026-02-19 04:52:19.538915 | orchestrator |  "You can find the report file here:",
2026-02-19 04:52:19.538926 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-02-19T04:52:05+00:00-report.json",
2026-02-19 04:52:19.538938 | orchestrator |  "on the following host:",
2026-02-19 04:52:19.538949 | orchestrator |  "testbed-manager"
2026-02-19 04:52:19.538959 | orchestrator |  ]
2026-02-19 04:52:19.538971 | orchestrator | }
2026-02-19 04:52:19.538982 | orchestrator |
2026-02-19 04:52:19.538992 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 04:52:19.539004 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-19 04:52:19.539016 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-19 04:52:19.539035 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-19 04:52:19.879909 | orchestrator |
2026-02-19 04:52:19.879996 | orchestrator |
2026-02-19 04:52:19.880008 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 04:52:19.880018 | orchestrator | Thursday 19 February 2026 04:52:19 +0000 (0:00:00.424) 0:00:15.079 *****
2026-02-19 04:52:19.880026 | orchestrator | ===============================================================================
2026-02-19 04:52:19.880034 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.09s
2026-02-19 04:52:19.880042 | orchestrator | Write report file ------------------------------------------------------- 1.36s
2026-02-19 04:52:19.880050 | orchestrator | Aggregate test results step one ----------------------------------------- 1.34s
2026-02-19 04:52:19.880058 | orchestrator | Get container info ------------------------------------------------------ 0.99s
2026-02-19 04:52:19.880066 | orchestrator | Create report output directory ------------------------------------------ 0.96s
2026-02-19 04:52:19.880073 | orchestrator | Get timestamp for report file ------------------------------------------- 0.89s
2026-02-19 04:52:19.880081 | orchestrator | Set test result to passed if container is existing ---------------------- 0.48s
2026-02-19 04:52:19.880089 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.48s
2026-02-19 04:52:19.880119 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.44s
2026-02-19 04:52:19.880128 | orchestrator | Flush handlers ---------------------------------------------------------- 0.44s
2026-02-19 04:52:19.880135 | orchestrator | Print report file information ------------------------------------------- 0.43s
2026-02-19 04:52:19.880143 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.35s
2026-02-19 04:52:19.880151 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s
2026-02-19 04:52:19.880158 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s
2026-02-19 04:52:19.880166 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2026-02-19 04:52:19.880174 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s
2026-02-19 04:52:19.880181 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s
2026-02-19 04:52:19.880205 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s
2026-02-19 04:52:19.880221 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s
2026-02-19 04:52:19.880229 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s
2026-02-19 04:52:20.188519 | orchestrator | + osism validate ceph-osds
2026-02-19 04:52:41.634008 | orchestrator |
2026-02-19 04:52:41.634144 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-02-19 04:52:41.634156 | orchestrator |
2026-02-19 04:52:41.634164 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-19 04:52:41.634171 | orchestrator | Thursday 19 February 2026 04:52:36 +0000 (0:00:00.451) 0:00:00.452 *****
2026-02-19 04:52:41.634179 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-19 04:52:41.634186 | orchestrator |
2026-02-19 04:52:41.634193 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-19 04:52:41.634200 | orchestrator | Thursday 19 February 2026 04:52:37 +0000 (0:00:00.872) 0:00:01.324 *****
2026-02-19 04:52:41.634207 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-19 04:52:41.634214 | orchestrator |
2026-02-19 04:52:41.634221 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-19 04:52:41.634227 | orchestrator | Thursday 19 February 2026 04:52:38 +0000 (0:00:00.550) 0:00:01.874 *****
2026-02-19 04:52:41.634234 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-19 04:52:41.634241 | orchestrator |
2026-02-19 04:52:41.634247 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-19 04:52:41.634254 | orchestrator | Thursday 19 February 2026 04:52:39 +0000 (0:00:00.746) 0:00:02.621 *****
2026-02-19 04:52:41.634261 | orchestrator | ok: [testbed-node-3]
2026-02-19 04:52:41.634269 | orchestrator |
2026-02-19 04:52:41.634276 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-19 04:52:41.634283 | orchestrator | Thursday 19 February 2026 04:52:39 +0000 (0:00:00.122) 0:00:02.744 *****
2026-02-19 04:52:41.634290 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:52:41.634296 | orchestrator |
2026-02-19 04:52:41.634303 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-19 04:52:41.634310 | orchestrator | Thursday 19 February 2026 04:52:39 +0000 (0:00:00.141) 0:00:02.885 *****
2026-02-19 04:52:41.634317 | orchestrator | skipping: [testbed-node-3]
2026-02-19 04:52:41.634323 | orchestrator | skipping: [testbed-node-4]
2026-02-19 04:52:41.634330 | orchestrator | skipping: [testbed-node-5]
2026-02-19 04:52:41.634336 | orchestrator |
2026-02-19 04:52:41.634356 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-19 04:52:41.634363 | orchestrator | Thursday 19 February 2026 04:52:39 +0000 (0:00:00.323) 0:00:03.209 *****
2026-02-19 04:52:41.634370 | orchestrator | ok: [testbed-node-3]
2026-02-19 04:52:41.634376 | orchestrator |
2026-02-19 04:52:41.634383 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-19 04:52:41.634407 | orchestrator | Thursday 19 February 2026 04:52:39 +0000 (0:00:00.163) 0:00:03.372 *****
2026-02-19 04:52:41.634414 | orchestrator | ok: [testbed-node-3]
2026-02-19 04:52:41.634420 | orchestrator | ok: [testbed-node-4]
2026-02-19 04:52:41.634427 | orchestrator | ok: [testbed-node-5]
2026-02-19 04:52:41.634433 | orchestrator |
2026-02-19 04:52:41.634440 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-02-19 04:52:41.634447 | orchestrator | Thursday 19 February 2026 04:52:40 +0000 (0:00:00.339) 0:00:03.712 *****
2026-02-19 04:52:41.634453 | orchestrator | ok: [testbed-node-3]
2026-02-19 04:52:41.634460 | orchestrator |
2026-02-19 04:52:41.634466 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-19 04:52:41.634473 | orchestrator | Thursday 19 February 2026 04:52:41 +0000 (0:00:00.823) 0:00:04.536 *****
2026-02-19 04:52:41.634490 | orchestrator | ok: [testbed-node-3]
2026-02-19 04:52:41.634497 | orchestrator | ok: [testbed-node-4]
2026-02-19 04:52:41.634503 | orchestrator | ok: [testbed-node-5]
2026-02-19 04:52:41.634510 | orchestrator |
2026-02-19 04:52:41.634516 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-02-19 04:52:41.634523 | orchestrator | Thursday 19 February 2026 04:52:41 +0000 (0:00:00.300) 0:00:04.836 *****
2026-02-19 04:52:41.634532 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3212ed1ec925ca04b5f9cb24db30473ab0bd37a13b25182a55280695b79ca6df', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-19 04:52:41.634542 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4e475cba6046c823b68b22c8bdd39ac80f2819cf1485e416d44d371a85935d45', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-19 04:52:41.634550 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e4a2c9e643673861ac61a10fb07c470b29a8d3c403abc9caacbd2244b26fbd2c', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-02-19 04:52:41.634558 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1b333b62ff23438d89bca355af259636671b7e648ee7616d29754b6a65bfb206', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-19 04:52:41.634566 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5be3ed58e59379ad50d1d2bf2f18d1e67abcd761cd3afc23ac5f7a6bcf5c910d', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-19 04:52:41.634593 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a79e03a59ac5f39d738cbb6d61f080f78979da581fab43010c3bbe9d25a495e5', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-19 04:52:41.634602 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8b2c872c26ddf76b3dff1196702e4eb3d462b1f9dce3d9bca5a2f9acdca6e1b2', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-19 04:52:41.634630 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1440548202d8bf173a8201e485d3e8476fbd605644fcc1030770d99b28f06cb1', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-02-19 04:52:41.634638 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4e9e545301252eee5662fd97c2cc514da1641df699a7c6048ab8fd18a1467294', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-19 04:52:41.634654 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2e784b641d4d969b927f9ad6a46611cbc4ea12b237cdc0f6f47f6b93f0d7599f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-19 04:52:41.634662 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4b6e2daa66640bb4db0a44c0eddde97904bb400e8613e78fbe53444ced72cb75', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-19 04:52:41.634671 | orchestrator | ok: [testbed-node-3] => (item={'id': '5a20eb2bcdc56167d10e9e3ddf385f37bb2eaddb9174c2c94556f9035a134c5e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-19 04:52:41.634680 | orchestrator | ok: [testbed-node-3] => (item={'id': '50ab9949d292037824962d182aa1fe72bcebb8ee41546b00e717dd6d46a0d6b1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-19 04:52:41.634689 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e5b3b0d3f535a7e82efe68d33d64b5b48e8f3d25f569ab8ee64e1d7c077306a5', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-19 04:52:41.634697 | orchestrator | skipping: [testbed-node-3] => (item={'id': '54650ee201f620f8b9d252e70a61e561b3d0257a87e389480a5fbb4ef71ce4fb', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-19 04:52:41.634705 | orchestrator | skipping: [testbed-node-3] => (item={'id': '545f2356144f9fb913d8ed8300b5f709e45ff37f9980f9f5ef7bc13e92e9bc98', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-19 04:52:41.634713 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3e2f7fc395fa90668a693e1ff88f3c86025d1e13d6041afc695bf0219c637ffc', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-19 04:52:41.634721 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9e5c2c30381e3c10ac170c700c49201422917ec708c41b98fd7fc350782859ca', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-19 04:52:41.634730 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b92361fbbe1e2dc1e74f5089c083de3dabbc207423a369d7051c6d42b961ca4d', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-19 04:52:41.634738 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f8b40a63cda13444462329402a115dff7818712c9e9a4aa9ee5ac56a97fc3b68', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up
10 minutes'})  2026-02-19 04:52:41.634750 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e4ad4548a30a4ebd549ddedcba4623840d578a2857161845f0c04e9fc0352801', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-19 04:52:41.891858 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b0f4e2e4412fe89f6bfe402ddb52bd290c69b41b62a9ee143e7a27ffb9d3ea66', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-02-19 04:52:41.891987 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a1638f2173f58902dded47e93ce867f08d7643dc8281cd743d6cc28e49adad70', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-19 04:52:41.892019 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5146bb2530cc6e460f81cf79f3ce06e418d95bef5329a82b20713b52e0f99eb3', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-19 04:52:41.892031 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6b17b708d6e3c1bb8e7766dddf67adcdb78c488ac204f9bb77945e0a9cf2f6d9', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-19 04:52:41.892056 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a55ed1ba7f8ccf220029d8f3bda00166d72347fa6e2936206f090975008bbc63', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-19 04:52:41.892064 | orchestrator | skipping: [testbed-node-4] => 
(item={'id': '92779d0e5763c1df38baae606a0eb3e2fbc297c9ad1c453c3d96bc886e60469f', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})  2026-02-19 04:52:41.892072 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ab57db7d630ba0315a9d7fcc426c0b2304a7a65294211eb389b1b5b44f65a7c3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-19 04:52:41.892081 | orchestrator | skipping: [testbed-node-4] => (item={'id': '931b675504b140c5b3d13133327de0de72188a1a32fb65f6e60dcee28b807f99', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-19 04:52:41.892090 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e2a56d9b120fbc6db23f7abb757c51ff41108b809346109b5ea79db6afcc6991', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-19 04:52:41.892099 | orchestrator | ok: [testbed-node-4] => (item={'id': '2dd086a497ac6547c0ed656dfeed7bb568da3387d31b77131690e4ce8bdb3612', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-19 04:52:41.892108 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e8fcb43bd64360587446af4ec57dad85dc7ff7f13d4e20daf916a90ade0f3896', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-19 04:52:41.892116 | orchestrator | skipping: [testbed-node-4] => (item={'id': '262a1f24dd1774649c4c459c8d6895a88787b5a14d1cf29bef6587dd3fe5237c', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 
'state': 'running', 'status': 'Up About an hour'})  2026-02-19 04:52:41.892125 | orchestrator | skipping: [testbed-node-4] => (item={'id': '659b73d4044cb34939d82379da8d99abc64a4252a40d3e31f427069e56ba7a1b', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-19 04:52:41.892133 | orchestrator | skipping: [testbed-node-4] => (item={'id': '92e98f20a8c1e020621b58949801b12c2f31d92eadae64e5d78165b1c6aa55c7', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-19 04:52:41.892156 | orchestrator | skipping: [testbed-node-4] => (item={'id': '66ffcbd39554d386a1b8fdd3c85df4cd66435b45082435365c9dd4d774198c1d', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-19 04:52:41.892171 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fd716b564122853047c364be80b1699674505fc4d60278ed878b1540e20fd932', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-19 04:52:41.892180 | orchestrator | skipping: [testbed-node-4] => (item={'id': '08a7042e250330bf24faf524d06d948f5c343db5399898e6654c75b7544d873f', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-19 04:52:41.892188 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8cad42b1fb4afe117172864c13f10bd8f8fcc138713f6254f411314a36c1f294', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-19 04:52:41.892196 | orchestrator | skipping: [testbed-node-5] => 
(item={'id': 'c464e9077cbfa7cb82a7e02a4244a9b63e3f6cedaf63b82ffcf56141539e4c71', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-19 04:52:41.892208 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f54d0d3a7e2d8781cb4cf89250a146012e189112100be264db0008d205932d77', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-02-19 04:52:41.892216 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ec6d2c4604bce42ec34ed27c815352e6781cc6d987055519aa4e38e2266d61cd', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-19 04:52:41.892224 | orchestrator | skipping: [testbed-node-5] => (item={'id': '78248edb18996539a54bb36bb5438102f2650db51c8f4aa8aaa5a4d11444defd', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-19 04:52:41.892232 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dfa02c5439bed5a9b28f86bdc98f554bf5b2f825de1361128171570a72f08d3b', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-19 04:52:41.892240 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fa1505d9c69c134d4477e7a9338d111ef82f7df0fd255f901ec439298b3c7b31', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-19 04:52:41.892248 | orchestrator | skipping: [testbed-node-5] => (item={'id': '039454535dd08125a11b8f15e4cf855c2a314f858a9914574a57e03f3867aea0', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})  2026-02-19 04:52:41.892256 | orchestrator | skipping: [testbed-node-5] => (item={'id': '25f17247bf683dc8186db9aea23a5a097964cb14cd199678a88fdc9a64df1440', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-19 04:52:41.892264 | orchestrator | skipping: [testbed-node-5] => (item={'id': '607a8a8c42ce1a5e95cb2c2036978ff7bd50865900ea41a8324a7433585759c5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-19 04:52:41.892272 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5382fc3a736c42b685a265df6ecce3ff5fb12002897d5da1e40166d9b9132c58', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-19 04:52:41.892286 | orchestrator | ok: [testbed-node-5] => (item={'id': 'e6fb2fd98ddd99171d962da5aaf37277fb578b827779d56ff0b68373da8cebb1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-19 04:52:41.892300 | orchestrator | ok: [testbed-node-5] => (item={'id': 'aba2eae4066a031fd4dd37802e95fd2593554e9283a2dfe1a4d2daba74d6b2c4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-19 04:52:53.436050 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3ce5ef9002390d72340d4e78842e322cfd4a35e88051bc28161322ad6f9f6bc9', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-19 04:52:53.436136 | 
orchestrator | skipping: [testbed-node-5] => (item={'id': 'f140dfcbc1b3ed79c37638b57924dd5dc0ed8944c79cd7810a50b441b02da6ef', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-19 04:52:53.436149 | orchestrator | skipping: [testbed-node-5] => (item={'id': '55d1ce728f29318d25fcd49b948f83d2849aa0f5aa4e874a02dabc57458ec8a5', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-19 04:52:53.436156 | orchestrator | skipping: [testbed-node-5] => (item={'id': '55302f3ae55447bac005287664dd7660fcfb60ed6830e063e5b86bfe510c74e1', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-19 04:52:53.436175 | orchestrator | skipping: [testbed-node-5] => (item={'id': '31f39b1cbf5b75c1936be16b841d97d39e2d0521767913dba1a04ee35b3d4c08', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-19 04:52:53.436182 | orchestrator | skipping: [testbed-node-5] => (item={'id': '00397dee237829647afe8a5f8f149af6cf24c94f14131cf37c5e789b2461baf5', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-19 04:52:53.436187 | orchestrator | 2026-02-19 04:52:53.436194 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-02-19 04:52:53.436205 | orchestrator | Thursday 19 February 2026 04:52:41 +0000 (0:00:00.502) 0:00:05.339 ***** 2026-02-19 04:52:53.436213 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:52:53.436224 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:52:53.436232 | orchestrator | ok: [testbed-node-5] 2026-02-19 
04:52:53.436241 | orchestrator | 2026-02-19 04:52:53.436251 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-02-19 04:52:53.436260 | orchestrator | Thursday 19 February 2026 04:52:42 +0000 (0:00:00.287) 0:00:05.626 ***** 2026-02-19 04:52:53.436269 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:52:53.436280 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:52:53.436290 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:52:53.436296 | orchestrator | 2026-02-19 04:52:53.436302 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-02-19 04:52:53.436308 | orchestrator | Thursday 19 February 2026 04:52:42 +0000 (0:00:00.476) 0:00:06.103 ***** 2026-02-19 04:52:53.436313 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:52:53.436319 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:52:53.436324 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:52:53.436330 | orchestrator | 2026-02-19 04:52:53.436335 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-19 04:52:53.436341 | orchestrator | Thursday 19 February 2026 04:52:42 +0000 (0:00:00.306) 0:00:06.409 ***** 2026-02-19 04:52:53.436346 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:52:53.436351 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:52:53.436357 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:52:53.436377 | orchestrator | 2026-02-19 04:52:53.436383 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-02-19 04:52:53.436388 | orchestrator | Thursday 19 February 2026 04:52:43 +0000 (0:00:00.310) 0:00:06.720 ***** 2026-02-19 04:52:53.436394 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-02-19 04:52:53.436400 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 
'state': 'running'})  2026-02-19 04:52:53.436406 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:52:53.436411 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-02-19 04:52:53.436417 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-02-19 04:52:53.436422 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:52:53.436428 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-02-19 04:52:53.436433 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-02-19 04:52:53.436439 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:52:53.436444 | orchestrator | 2026-02-19 04:52:53.436450 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-02-19 04:52:53.436455 | orchestrator | Thursday 19 February 2026 04:52:43 +0000 (0:00:00.342) 0:00:07.062 ***** 2026-02-19 04:52:53.436461 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:52:53.436466 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:52:53.436471 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:52:53.436477 | orchestrator | 2026-02-19 04:52:53.436482 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-19 04:52:53.436488 | orchestrator | Thursday 19 February 2026 04:52:44 +0000 (0:00:00.518) 0:00:07.581 ***** 2026-02-19 04:52:53.436493 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:52:53.436511 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:52:53.436517 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:52:53.436523 | orchestrator | 2026-02-19 04:52:53.436528 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-19 04:52:53.436534 | orchestrator | Thursday 19 
February 2026 04:52:44 +0000 (0:00:00.334) 0:00:07.915 ***** 2026-02-19 04:52:53.436539 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:52:53.436544 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:52:53.436550 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:52:53.436555 | orchestrator | 2026-02-19 04:52:53.436560 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-02-19 04:52:53.436566 | orchestrator | Thursday 19 February 2026 04:52:44 +0000 (0:00:00.318) 0:00:08.234 ***** 2026-02-19 04:52:53.436571 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:52:53.436577 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:52:53.436582 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:52:53.436587 | orchestrator | 2026-02-19 04:52:53.436592 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-19 04:52:53.436598 | orchestrator | Thursday 19 February 2026 04:52:45 +0000 (0:00:00.305) 0:00:08.540 ***** 2026-02-19 04:52:53.436603 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:52:53.436609 | orchestrator | 2026-02-19 04:52:53.436655 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-19 04:52:53.436662 | orchestrator | Thursday 19 February 2026 04:52:45 +0000 (0:00:00.686) 0:00:09.226 ***** 2026-02-19 04:52:53.436667 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:52:53.436673 | orchestrator | 2026-02-19 04:52:53.436678 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-19 04:52:53.436684 | orchestrator | Thursday 19 February 2026 04:52:46 +0000 (0:00:00.303) 0:00:09.530 ***** 2026-02-19 04:52:53.436689 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:52:53.436696 | orchestrator | 2026-02-19 04:52:53.436706 | orchestrator | TASK [Flush handlers] ********************************************************** 
2026-02-19 04:52:53.436835 | orchestrator | Thursday 19 February 2026 04:52:46 +0000 (0:00:00.263) 0:00:09.793 ***** 2026-02-19 04:52:53.436848 | orchestrator | 2026-02-19 04:52:53.436854 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-19 04:52:53.436859 | orchestrator | Thursday 19 February 2026 04:52:46 +0000 (0:00:00.068) 0:00:09.862 ***** 2026-02-19 04:52:53.436865 | orchestrator | 2026-02-19 04:52:53.436871 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-19 04:52:53.436876 | orchestrator | Thursday 19 February 2026 04:52:46 +0000 (0:00:00.068) 0:00:09.931 ***** 2026-02-19 04:52:53.436882 | orchestrator | 2026-02-19 04:52:53.436887 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-19 04:52:53.436892 | orchestrator | Thursday 19 February 2026 04:52:46 +0000 (0:00:00.071) 0:00:10.002 ***** 2026-02-19 04:52:53.436897 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:52:53.436902 | orchestrator | 2026-02-19 04:52:53.436907 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-02-19 04:52:53.436911 | orchestrator | Thursday 19 February 2026 04:52:46 +0000 (0:00:00.303) 0:00:10.306 ***** 2026-02-19 04:52:53.436916 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:52:53.436921 | orchestrator | 2026-02-19 04:52:53.436926 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-19 04:52:53.436930 | orchestrator | Thursday 19 February 2026 04:52:47 +0000 (0:00:00.250) 0:00:10.556 ***** 2026-02-19 04:52:53.436935 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:52:53.436940 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:52:53.436945 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:52:53.436950 | orchestrator | 2026-02-19 04:52:53.436955 | orchestrator | TASK [Set _mon_hostname 
fact] ************************************************** 2026-02-19 04:52:53.436959 | orchestrator | Thursday 19 February 2026 04:52:47 +0000 (0:00:00.305) 0:00:10.861 ***** 2026-02-19 04:52:53.436964 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:52:53.436969 | orchestrator | 2026-02-19 04:52:53.436974 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-02-19 04:52:53.436978 | orchestrator | Thursday 19 February 2026 04:52:48 +0000 (0:00:00.651) 0:00:11.513 ***** 2026-02-19 04:52:53.436983 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 04:52:53.436988 | orchestrator | 2026-02-19 04:52:53.436993 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-02-19 04:52:53.436998 | orchestrator | Thursday 19 February 2026 04:52:49 +0000 (0:00:01.712) 0:00:13.225 ***** 2026-02-19 04:52:53.437002 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:52:53.437007 | orchestrator | 2026-02-19 04:52:53.437012 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-02-19 04:52:53.437016 | orchestrator | Thursday 19 February 2026 04:52:49 +0000 (0:00:00.125) 0:00:13.350 ***** 2026-02-19 04:52:53.437021 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:52:53.437026 | orchestrator | 2026-02-19 04:52:53.437031 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-02-19 04:52:53.437035 | orchestrator | Thursday 19 February 2026 04:52:50 +0000 (0:00:00.313) 0:00:13.663 ***** 2026-02-19 04:52:53.437040 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:52:53.437045 | orchestrator | 2026-02-19 04:52:53.437049 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-02-19 04:52:53.437054 | orchestrator | Thursday 19 February 2026 04:52:50 +0000 (0:00:00.119) 0:00:13.783 ***** 2026-02-19 04:52:53.437059 
| orchestrator | ok: [testbed-node-3] 2026-02-19 04:52:53.437064 | orchestrator | 2026-02-19 04:52:53.437069 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-19 04:52:53.437074 | orchestrator | Thursday 19 February 2026 04:52:50 +0000 (0:00:00.125) 0:00:13.909 ***** 2026-02-19 04:52:53.437078 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:52:53.437083 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:52:53.437088 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:52:53.437098 | orchestrator | 2026-02-19 04:52:53.437103 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-02-19 04:52:53.437108 | orchestrator | Thursday 19 February 2026 04:52:50 +0000 (0:00:00.323) 0:00:14.232 ***** 2026-02-19 04:52:53.437113 | orchestrator | changed: [testbed-node-3] 2026-02-19 04:52:53.437118 | orchestrator | changed: [testbed-node-4] 2026-02-19 04:52:53.437122 | orchestrator | changed: [testbed-node-5] 2026-02-19 04:53:03.719722 | orchestrator | 2026-02-19 04:53:03.719868 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-02-19 04:53:03.719888 | orchestrator | Thursday 19 February 2026 04:52:53 +0000 (0:00:02.655) 0:00:16.887 ***** 2026-02-19 04:53:03.719901 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:53:03.719913 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:53:03.719923 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:53:03.719934 | orchestrator | 2026-02-19 04:53:03.719945 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-02-19 04:53:03.719956 | orchestrator | Thursday 19 February 2026 04:52:53 +0000 (0:00:00.317) 0:00:17.205 ***** 2026-02-19 04:53:03.719967 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:53:03.719979 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:53:03.719996 | orchestrator | ok: [testbed-node-5] 2026-02-19 
04:53:03.720015 | orchestrator | 2026-02-19 04:53:03.720033 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-02-19 04:53:03.720052 | orchestrator | Thursday 19 February 2026 04:52:54 +0000 (0:00:00.497) 0:00:17.702 ***** 2026-02-19 04:53:03.720072 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:53:03.720092 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:53:03.720112 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:53:03.720131 | orchestrator | 2026-02-19 04:53:03.720149 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-02-19 04:53:03.720167 | orchestrator | Thursday 19 February 2026 04:52:54 +0000 (0:00:00.314) 0:00:18.017 ***** 2026-02-19 04:53:03.720179 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:53:03.720191 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:53:03.720204 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:53:03.720222 | orchestrator | 2026-02-19 04:53:03.720241 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-02-19 04:53:03.720267 | orchestrator | Thursday 19 February 2026 04:52:55 +0000 (0:00:00.559) 0:00:18.577 ***** 2026-02-19 04:53:03.720287 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:53:03.720307 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:53:03.720327 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:53:03.720344 | orchestrator | 2026-02-19 04:53:03.720358 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-02-19 04:53:03.720374 | orchestrator | Thursday 19 February 2026 04:52:55 +0000 (0:00:00.309) 0:00:18.886 ***** 2026-02-19 04:53:03.720394 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:53:03.720413 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:53:03.720432 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:53:03.720451 | 
orchestrator | 2026-02-19 04:53:03.720471 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-19 04:53:03.720490 | orchestrator | Thursday 19 February 2026 04:52:55 +0000 (0:00:00.300) 0:00:19.186 ***** 2026-02-19 04:53:03.720511 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:53:03.720530 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:53:03.720548 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:53:03.720561 | orchestrator | 2026-02-19 04:53:03.720574 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-02-19 04:53:03.720585 | orchestrator | Thursday 19 February 2026 04:52:56 +0000 (0:00:00.522) 0:00:19.709 ***** 2026-02-19 04:53:03.720596 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:53:03.720607 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:53:03.720644 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:53:03.720658 | orchestrator | 2026-02-19 04:53:03.720670 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-02-19 04:53:03.720706 | orchestrator | Thursday 19 February 2026 04:52:57 +0000 (0:00:00.762) 0:00:20.471 ***** 2026-02-19 04:53:03.720717 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:53:03.720728 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:53:03.720741 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:53:03.720760 | orchestrator | 2026-02-19 04:53:03.720807 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-02-19 04:53:03.720827 | orchestrator | Thursday 19 February 2026 04:52:57 +0000 (0:00:00.330) 0:00:20.801 ***** 2026-02-19 04:53:03.720847 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:53:03.720865 | orchestrator | skipping: [testbed-node-4] 2026-02-19 04:53:03.720884 | orchestrator | skipping: [testbed-node-5] 2026-02-19 04:53:03.720895 | orchestrator | 2026-02-19 04:53:03.720907 | orchestrator 
| TASK [Pass test if no sub test failed] ***************************************** 2026-02-19 04:53:03.720917 | orchestrator | Thursday 19 February 2026 04:52:57 +0000 (0:00:00.327) 0:00:21.129 ***** 2026-02-19 04:53:03.720928 | orchestrator | ok: [testbed-node-3] 2026-02-19 04:53:03.720939 | orchestrator | ok: [testbed-node-4] 2026-02-19 04:53:03.720949 | orchestrator | ok: [testbed-node-5] 2026-02-19 04:53:03.720960 | orchestrator | 2026-02-19 04:53:03.720970 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-19 04:53:03.720981 | orchestrator | Thursday 19 February 2026 04:52:58 +0000 (0:00:00.524) 0:00:21.653 ***** 2026-02-19 04:53:03.720992 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-19 04:53:03.721003 | orchestrator | 2026-02-19 04:53:03.721014 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-19 04:53:03.721024 | orchestrator | Thursday 19 February 2026 04:52:58 +0000 (0:00:00.305) 0:00:21.959 ***** 2026-02-19 04:53:03.721035 | orchestrator | skipping: [testbed-node-3] 2026-02-19 04:53:03.721046 | orchestrator | 2026-02-19 04:53:03.721056 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-19 04:53:03.721067 | orchestrator | Thursday 19 February 2026 04:52:58 +0000 (0:00:00.256) 0:00:22.215 ***** 2026-02-19 04:53:03.721078 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-19 04:53:03.721089 | orchestrator | 2026-02-19 04:53:03.721099 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-19 04:53:03.721117 | orchestrator | Thursday 19 February 2026 04:53:00 +0000 (0:00:01.715) 0:00:23.931 ***** 2026-02-19 04:53:03.721135 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-19 04:53:03.721153 | orchestrator | 2026-02-19 04:53:03.721173 | orchestrator | TASK 
[Aggregate test results step three] *************************************** 2026-02-19 04:53:03.721191 | orchestrator | Thursday 19 February 2026 04:53:00 +0000 (0:00:00.283) 0:00:24.215 ***** 2026-02-19 04:53:03.721211 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-19 04:53:03.721230 | orchestrator | 2026-02-19 04:53:03.721274 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-19 04:53:03.721287 | orchestrator | Thursday 19 February 2026 04:53:01 +0000 (0:00:00.270) 0:00:24.485 ***** 2026-02-19 04:53:03.721298 | orchestrator | 2026-02-19 04:53:03.721309 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-19 04:53:03.721319 | orchestrator | Thursday 19 February 2026 04:53:01 +0000 (0:00:00.077) 0:00:24.563 ***** 2026-02-19 04:53:03.721330 | orchestrator | 2026-02-19 04:53:03.721341 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-19 04:53:03.721352 | orchestrator | Thursday 19 February 2026 04:53:01 +0000 (0:00:00.070) 0:00:24.633 ***** 2026-02-19 04:53:03.721363 | orchestrator | 2026-02-19 04:53:03.721374 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-19 04:53:03.721384 | orchestrator | Thursday 19 February 2026 04:53:01 +0000 (0:00:00.076) 0:00:24.709 ***** 2026-02-19 04:53:03.721395 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-19 04:53:03.721406 | orchestrator | 2026-02-19 04:53:03.721416 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-19 04:53:03.721438 | orchestrator | Thursday 19 February 2026 04:53:02 +0000 (0:00:01.513) 0:00:26.223 ***** 2026-02-19 04:53:03.721448 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-02-19 04:53:03.721459 | orchestrator |  "msg": [ 2026-02-19 
04:53:03.721474 | orchestrator |  "Validator run completed.", 2026-02-19 04:53:03.721493 | orchestrator |  "You can find the report file here:", 2026-02-19 04:53:03.721512 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-19T04:52:37+00:00-report.json", 2026-02-19 04:53:03.721540 | orchestrator |  "on the following host:", 2026-02-19 04:53:03.721560 | orchestrator |  "testbed-manager" 2026-02-19 04:53:03.721580 | orchestrator |  ] 2026-02-19 04:53:03.721598 | orchestrator | } 2026-02-19 04:53:03.721667 | orchestrator | 2026-02-19 04:53:03.721683 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:53:03.721695 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-19 04:53:03.721707 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-19 04:53:03.721718 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-19 04:53:03.721729 | orchestrator | 2026-02-19 04:53:03.721740 | orchestrator | 2026-02-19 04:53:03.721750 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:53:03.721761 | orchestrator | Thursday 19 February 2026 04:53:03 +0000 (0:00:00.620) 0:00:26.844 ***** 2026-02-19 04:53:03.721772 | orchestrator | =============================================================================== 2026-02-19 04:53:03.721783 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.66s 2026-02-19 04:53:03.721793 | orchestrator | Aggregate test results step one ----------------------------------------- 1.72s 2026-02-19 04:53:03.721804 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.71s 2026-02-19 04:53:03.721815 | orchestrator | Write report file 
------------------------------------------------------- 1.51s 2026-02-19 04:53:03.721825 | orchestrator | Get timestamp for report file ------------------------------------------- 0.87s 2026-02-19 04:53:03.721860 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.82s 2026-02-19 04:53:03.721881 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.76s 2026-02-19 04:53:03.721900 | orchestrator | Create report output directory ------------------------------------------ 0.75s 2026-02-19 04:53:03.721918 | orchestrator | Aggregate test results step one ----------------------------------------- 0.69s 2026-02-19 04:53:03.721937 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.65s 2026-02-19 04:53:03.721956 | orchestrator | Print report file information ------------------------------------------- 0.62s 2026-02-19 04:53:03.721974 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.56s 2026-02-19 04:53:03.721989 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.55s 2026-02-19 04:53:03.722000 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.52s 2026-02-19 04:53:03.722010 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2026-02-19 04:53:03.722142 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.52s 2026-02-19 04:53:03.722154 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.50s 2026-02-19 04:53:03.722165 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.50s 2026-02-19 04:53:03.722176 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.48s 2026-02-19 04:53:03.722186 | orchestrator | Get list of ceph-osd containers 
that are not running -------------------- 0.34s 2026-02-19 04:53:04.014724 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-02-19 04:53:04.024746 | orchestrator | + set -e 2026-02-19 04:53:04.024849 | orchestrator | + source /opt/manager-vars.sh 2026-02-19 04:53:04.024869 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-19 04:53:04.024884 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-19 04:53:04.024898 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-19 04:53:04.024912 | orchestrator | ++ CEPH_VERSION=reef 2026-02-19 04:53:04.024934 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-19 04:53:04.024949 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-19 04:53:04.024965 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-19 04:53:04.024980 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-19 04:53:04.025002 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-19 04:53:04.025023 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-19 04:53:04.025037 | orchestrator | ++ export ARA=false 2026-02-19 04:53:04.025053 | orchestrator | ++ ARA=false 2026-02-19 04:53:04.025068 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-19 04:53:04.025084 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-19 04:53:04.025099 | orchestrator | ++ export TEMPEST=false 2026-02-19 04:53:04.025115 | orchestrator | ++ TEMPEST=false 2026-02-19 04:53:04.025130 | orchestrator | ++ export IS_ZUUL=true 2026-02-19 04:53:04.025145 | orchestrator | ++ IS_ZUUL=true 2026-02-19 04:53:04.025160 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 04:53:04.025174 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 04:53:04.025189 | orchestrator | ++ export EXTERNAL_API=false 2026-02-19 04:53:04.025203 | orchestrator | ++ EXTERNAL_API=false 2026-02-19 04:53:04.025218 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-19 04:53:04.025233 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-19 
04:53:04.025250 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-19 04:53:04.025266 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-19 04:53:04.025279 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-19 04:53:04.025294 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-19 04:53:04.025308 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-19 04:53:04.025321 | orchestrator | + source /etc/os-release 2026-02-19 04:53:04.025334 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-02-19 04:53:04.025349 | orchestrator | ++ NAME=Ubuntu 2026-02-19 04:53:04.025375 | orchestrator | ++ VERSION_ID=24.04 2026-02-19 04:53:04.025387 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-02-19 04:53:04.025400 | orchestrator | ++ VERSION_CODENAME=noble 2026-02-19 04:53:04.025413 | orchestrator | ++ ID=ubuntu 2026-02-19 04:53:04.025425 | orchestrator | ++ ID_LIKE=debian 2026-02-19 04:53:04.025437 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-02-19 04:53:04.025449 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-02-19 04:53:04.025461 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-02-19 04:53:04.025473 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-02-19 04:53:04.025487 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-02-19 04:53:04.025499 | orchestrator | ++ LOGO=ubuntu-logo 2026-02-19 04:53:04.025512 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-02-19 04:53:04.025524 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-02-19 04:53:04.025537 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-19 04:53:04.039451 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-19 04:53:24.855756 | orchestrator | 
2026-02-19 04:53:24.855900 | orchestrator | # Status of Elasticsearch 2026-02-19 04:53:24.855915 | orchestrator | 2026-02-19 04:53:24.855925 | orchestrator | + pushd /opt/configuration/contrib 2026-02-19 04:53:24.855936 | orchestrator | + echo 2026-02-19 04:53:24.855945 | orchestrator | + echo '# Status of Elasticsearch' 2026-02-19 04:53:24.855954 | orchestrator | + echo 2026-02-19 04:53:24.855964 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-02-19 04:53:25.047895 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-02-19 04:53:25.048015 | orchestrator | 2026-02-19 04:53:25.048029 | orchestrator | # Status of MariaDB 2026-02-19 04:53:25.048041 | orchestrator | 2026-02-19 04:53:25.048051 | orchestrator | + echo 2026-02-19 04:53:25.048093 | orchestrator | + echo '# Status of MariaDB' 2026-02-19 04:53:25.048104 | orchestrator | + echo 2026-02-19 04:53:25.049090 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-19 04:53:25.116570 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-19 04:53:25.116734 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-19 04:53:25.116752 | orchestrator | + MARIADB_USER=root_shard_0 2026-02-19 04:53:25.116766 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-02-19 04:53:25.186885 | orchestrator | Reading package lists... 2026-02-19 04:53:25.583674 | orchestrator | Building dependency tree... 2026-02-19 04:53:25.584153 | orchestrator | Reading state information... 2026-02-19 04:53:25.977810 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 
2026-02-19 04:53:25.977910 | orchestrator | bc set to manually installed. 2026-02-19 04:53:25.977927 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-02-19 04:53:26.656847 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-02-19 04:53:26.656969 | orchestrator | 2026-02-19 04:53:26.656993 | orchestrator | # Status of Prometheus 2026-02-19 04:53:26.657011 | orchestrator | 2026-02-19 04:53:26.657029 | orchestrator | + echo 2026-02-19 04:53:26.657049 | orchestrator | + echo '# Status of Prometheus' 2026-02-19 04:53:26.657066 | orchestrator | + echo 2026-02-19 04:53:26.657084 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-02-19 04:53:26.708338 | orchestrator | Unauthorized 2026-02-19 04:53:26.714456 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-02-19 04:53:26.776576 | orchestrator | Unauthorized 2026-02-19 04:53:26.782212 | orchestrator | 2026-02-19 04:53:26.782391 | orchestrator | # Status of RabbitMQ 2026-02-19 04:53:26.782407 | orchestrator | 2026-02-19 04:53:26.782420 | orchestrator | + echo 2026-02-19 04:53:26.782431 | orchestrator | + echo '# Status of RabbitMQ' 2026-02-19 04:53:26.782449 | orchestrator | + echo 2026-02-19 04:53:26.782482 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-19 04:53:26.848541 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-19 04:53:26.848667 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-19 04:53:26.848684 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-02-19 04:53:27.265416 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-02-19 04:53:27.274316 | orchestrator | 2026-02-19 04:53:27.274398 | orchestrator | # Status of Redis 2026-02-19 04:53:27.274411 | orchestrator | 2026-02-19 04:53:27.274422 | orchestrator | + echo 2026-02-19 04:53:27.274434 | orchestrator | + 
echo '# Status of Redis' 2026-02-19 04:53:27.274445 | orchestrator | + echo 2026-02-19 04:53:27.274458 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-02-19 04:53:27.279777 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002045s;;;0.000000;10.000000 2026-02-19 04:53:27.279863 | orchestrator | 2026-02-19 04:53:27.279878 | orchestrator | # Create backup of MariaDB database 2026-02-19 04:53:27.279889 | orchestrator | 2026-02-19 04:53:27.279900 | orchestrator | + popd 2026-02-19 04:53:27.279912 | orchestrator | + echo 2026-02-19 04:53:27.279923 | orchestrator | + echo '# Create backup of MariaDB database' 2026-02-19 04:53:27.279933 | orchestrator | + echo 2026-02-19 04:53:27.279945 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-02-19 04:53:29.317799 | orchestrator | 2026-02-19 04:53:29 | INFO  | Task 06a8fd3e-8a17-4c3b-a379-449322d490cf (mariadb_backup) was prepared for execution. 2026-02-19 04:53:29.317870 | orchestrator | 2026-02-19 04:53:29 | INFO  | It takes a moment until task 06a8fd3e-8a17-4c3b-a379-449322d490cf (mariadb_backup) has been started and output is visible here. 
2026-02-19 04:53:58.269280 | orchestrator | 2026-02-19 04:53:58.269388 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 04:53:58.269406 | orchestrator | 2026-02-19 04:53:58.269418 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 04:53:58.269430 | orchestrator | Thursday 19 February 2026 04:53:33 +0000 (0:00:00.184) 0:00:00.184 ***** 2026-02-19 04:53:58.269441 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:53:58.269454 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:53:58.269465 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:53:58.269476 | orchestrator | 2026-02-19 04:53:58.269487 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 04:53:58.269522 | orchestrator | Thursday 19 February 2026 04:53:33 +0000 (0:00:00.335) 0:00:00.520 ***** 2026-02-19 04:53:58.269534 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-19 04:53:58.269545 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-19 04:53:58.269563 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-19 04:53:58.269582 | orchestrator | 2026-02-19 04:53:58.269611 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-19 04:53:58.269662 | orchestrator | 2026-02-19 04:53:58.269682 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-19 04:53:58.269700 | orchestrator | Thursday 19 February 2026 04:53:34 +0000 (0:00:00.572) 0:00:01.093 ***** 2026-02-19 04:53:58.269717 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 04:53:58.269736 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-19 04:53:58.269755 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-19 04:53:58.269772 | orchestrator | 
2026-02-19 04:53:58.269791 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-19 04:53:58.269808 | orchestrator | Thursday 19 February 2026 04:53:34 +0000 (0:00:00.412) 0:00:01.505 ***** 2026-02-19 04:53:58.269830 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 04:53:58.269852 | orchestrator | 2026-02-19 04:53:58.269872 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-02-19 04:53:58.269903 | orchestrator | Thursday 19 February 2026 04:53:35 +0000 (0:00:00.577) 0:00:02.082 ***** 2026-02-19 04:53:58.269915 | orchestrator | ok: [testbed-node-0] 2026-02-19 04:53:58.269926 | orchestrator | ok: [testbed-node-1] 2026-02-19 04:53:58.269937 | orchestrator | ok: [testbed-node-2] 2026-02-19 04:53:58.269947 | orchestrator | 2026-02-19 04:53:58.269958 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-02-19 04:53:58.269969 | orchestrator | Thursday 19 February 2026 04:53:38 +0000 (0:00:03.200) 0:00:05.283 ***** 2026-02-19 04:53:58.269979 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-19 04:53:58.269990 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-19 04:53:58.270001 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-19 04:53:58.270012 | orchestrator | mariadb_bootstrap_restart 2026-02-19 04:53:58.270089 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:53:58.270100 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:53:58.270111 | orchestrator | changed: [testbed-node-0] 2026-02-19 04:53:58.270122 | orchestrator | 2026-02-19 04:53:58.270132 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-19 04:53:58.270143 | orchestrator | 
skipping: no hosts matched 2026-02-19 04:53:58.270154 | orchestrator | 2026-02-19 04:53:58.270164 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-19 04:53:58.270175 | orchestrator | skipping: no hosts matched 2026-02-19 04:53:58.270186 | orchestrator | 2026-02-19 04:53:58.270196 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-19 04:53:58.270207 | orchestrator | skipping: no hosts matched 2026-02-19 04:53:58.270218 | orchestrator | 2026-02-19 04:53:58.270228 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-19 04:53:58.270239 | orchestrator | 2026-02-19 04:53:58.270250 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-19 04:53:58.270260 | orchestrator | Thursday 19 February 2026 04:53:57 +0000 (0:00:18.450) 0:00:23.734 ***** 2026-02-19 04:53:58.270271 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:53:58.270281 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:53:58.270292 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:53:58.270303 | orchestrator | 2026-02-19 04:53:58.270314 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-19 04:53:58.270336 | orchestrator | Thursday 19 February 2026 04:53:57 +0000 (0:00:00.310) 0:00:24.044 ***** 2026-02-19 04:53:58.270347 | orchestrator | skipping: [testbed-node-0] 2026-02-19 04:53:58.270357 | orchestrator | skipping: [testbed-node-1] 2026-02-19 04:53:58.270368 | orchestrator | skipping: [testbed-node-2] 2026-02-19 04:53:58.270378 | orchestrator | 2026-02-19 04:53:58.270389 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:53:58.270401 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 
04:53:58.270413 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-19 04:53:58.270424 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-19 04:53:58.270435 | orchestrator | 2026-02-19 04:53:58.270445 | orchestrator | 2026-02-19 04:53:58.270456 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:53:58.270467 | orchestrator | Thursday 19 February 2026 04:53:57 +0000 (0:00:00.408) 0:00:24.452 ***** 2026-02-19 04:53:58.270478 | orchestrator | =============================================================================== 2026-02-19 04:53:58.270489 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.45s 2026-02-19 04:53:58.270519 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.20s 2026-02-19 04:53:58.270531 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.58s 2026-02-19 04:53:58.270543 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-02-19 04:53:58.270563 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s 2026-02-19 04:53:58.270583 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.41s 2026-02-19 04:53:58.270604 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-02-19 04:53:58.270626 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2026-02-19 04:53:58.591428 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-02-19 04:53:58.602122 | orchestrator | + set -e 2026-02-19 04:53:58.602222 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-19 04:53:58.602779 | orchestrator | ++ export 
INTERACTIVE=false 2026-02-19 04:53:58.602896 | orchestrator | ++ INTERACTIVE=false 2026-02-19 04:53:58.602916 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-19 04:53:58.602936 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-19 04:53:58.602964 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-19 04:53:58.605330 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-19 04:53:58.613856 | orchestrator | 2026-02-19 04:53:58.613935 | orchestrator | # OpenStack endpoints 2026-02-19 04:53:58.613948 | orchestrator | 2026-02-19 04:53:58.613959 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-19 04:53:58.613971 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-19 04:53:58.613981 | orchestrator | + export OS_CLOUD=admin 2026-02-19 04:53:58.613992 | orchestrator | + OS_CLOUD=admin 2026-02-19 04:53:58.614003 | orchestrator | + echo 2026-02-19 04:53:58.614014 | orchestrator | + echo '# OpenStack endpoints' 2026-02-19 04:53:58.614081 | orchestrator | + echo 2026-02-19 04:53:58.614093 | orchestrator | + openstack endpoint list 2026-02-19 04:54:01.780238 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-19 04:54:01.780342 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-02-19 04:54:01.780358 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-19 04:54:01.780369 | orchestrator | | 05c2425c4a704690b103b9dc461a650c | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-02-19 04:54:01.780419 | orchestrator | | 0700fa0a93d74d89b113016d6f521fa0 | RegionOne | swift | object-store | True | 
internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-19 04:54:01.780432 | orchestrator | | 088cb263626640db977721d7866fc395 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-02-19 04:54:01.780442 | orchestrator | | 0a4d0f2b69df47599b5b3632ebf89417 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-02-19 04:54:01.780454 | orchestrator | | 118edbb061a044d095db6aac6222da5f | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-02-19 04:54:01.780464 | orchestrator | | 21805f065c694625a2da6b07e8752caf | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-02-19 04:54:01.780474 | orchestrator | | 2b817798b47849ea88c41b055ca3da70 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-02-19 04:54:01.780483 | orchestrator | | 2d6b9fa6c4034e85b485778a9c02680b | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-19 04:54:01.780492 | orchestrator | | 2ed2c69be8b441d09f1cf5c1aace652f | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-19 04:54:01.780501 | orchestrator | | 383bdff8e20a4ce896156bcd8e593358 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-02-19 04:54:01.780510 | orchestrator | | 473d259936644a4fa906283b2ea96849 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-19 04:54:01.780520 | orchestrator | | 52a4ac2b12c04001a087c18c7c072e7d | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-02-19 04:54:01.780529 | orchestrator | | 583ec64d5fe14855bcabcfa21ec4579b | RegionOne | cinderv3 | volumev3 | True | public | 
https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-19 04:54:01.780538 | orchestrator | | 5e5d755a995e46e3b2c711d872bb751d | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-02-19 04:54:01.780548 | orchestrator | | 637eee24c92f4f75a60494783bcf9c96 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-19 04:54:01.780558 | orchestrator | | 69c38f1e469d4d78a39fdf724bf9012b | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-02-19 04:54:01.780567 | orchestrator | | 75daeb86814c44f5873628c035e7939d | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-02-19 04:54:01.780578 | orchestrator | | 7b5a2eef88a2483c83655b4c4679fd37 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-02-19 04:54:01.780588 | orchestrator | | 80edc2abb52343599e5ee6c1b19b7b24 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-02-19 04:54:01.780597 | orchestrator | | 901371c85ecd4ad68047106a1f9cba6d | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-02-19 04:54:01.780674 | orchestrator | | 9615d24c70c04459a807d1b4ac448feb | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-02-19 04:54:01.780699 | orchestrator | | 9a32b060cd914befb927e94c6ea5ffc0 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-02-19 04:54:01.780716 | orchestrator | | a2d3dcd987f14cb7955db7d29c173ba9 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-02-19 04:54:01.780726 | orchestrator | | af110ae2c15648faac3e1e643c4ebeb2 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-02-19 04:54:01.780735 | orchestrator | | 
b117d3de02b44e8ebae3e32f0d7a5e55 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-02-19 04:54:01.780744 | orchestrator | | c73fae21dec146f192d6e9054ddcb4bc | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-02-19 04:54:01.780754 | orchestrator | | c839359c849b438b9a4ce6e35861608d | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-02-19 04:54:01.780760 | orchestrator | | dc0ef777a4274419a0579e50b9cc0a62 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-02-19 04:54:01.780765 | orchestrator | | ef71956e5d8749ba950d749a9049a354 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-02-19 04:54:01.780771 | orchestrator | | f1ffe61a3d354ad6a41245da89345402 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-02-19 04:54:01.780776 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-19 04:54:02.045446 | orchestrator | 2026-02-19 04:54:02.045549 | orchestrator | # Cinder 2026-02-19 04:54:02.045562 | orchestrator | 2026-02-19 04:54:02.045571 | orchestrator | + echo 2026-02-19 04:54:02.045579 | orchestrator | + echo '# Cinder' 2026-02-19 04:54:02.045587 | orchestrator | + echo 2026-02-19 04:54:02.045596 | orchestrator | + openstack volume service list 2026-02-19 04:54:04.698101 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-19 04:54:04.698223 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-02-19 04:54:04.698240 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 
2026-02-19 04:54:04.698253 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-19T04:54:02.000000 | 2026-02-19 04:54:04.698264 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-19T04:54:02.000000 | 2026-02-19 04:54:04.698275 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-19T04:54:02.000000 | 2026-02-19 04:54:04.698302 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-19T04:54:02.000000 | 2026-02-19 04:54:04.698313 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-19T04:53:59.000000 | 2026-02-19 04:54:04.698324 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-19T04:54:02.000000 | 2026-02-19 04:54:04.698335 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-19T04:54:03.000000 | 2026-02-19 04:54:04.698345 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-19T04:53:56.000000 | 2026-02-19 04:54:04.698356 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-19T04:53:57.000000 | 2026-02-19 04:54:04.698391 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-19 04:54:04.954735 | orchestrator | 2026-02-19 04:54:04.954835 | orchestrator | # Neutron 2026-02-19 04:54:04.954850 | orchestrator | 2026-02-19 04:54:04.954862 | orchestrator | + echo 2026-02-19 04:54:04.954874 | orchestrator | + echo '# Neutron' 2026-02-19 04:54:04.954885 | orchestrator | + echo 2026-02-19 04:54:04.954896 | orchestrator | + openstack network agent list 2026-02-19 04:54:07.659783 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-19 04:54:07.659921 | 
orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-02-19 04:54:07.659947 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-19 04:54:07.659968 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-02-19 04:54:07.659986 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-02-19 04:54:07.660005 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-02-19 04:54:07.660025 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-02-19 04:54:07.660070 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-02-19 04:54:07.660092 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-02-19 04:54:07.660113 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-19 04:54:07.660134 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-19 04:54:07.660154 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-19 04:54:07.660173 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-19 04:54:07.933885 | orchestrator | + openstack network service provider list 2026-02-19 04:54:10.436664 | orchestrator | +---------------+------+---------+ 2026-02-19 
04:54:10.436738 | orchestrator | | Service Type | Name | Default | 2026-02-19 04:54:10.436744 | orchestrator | +---------------+------+---------+ 2026-02-19 04:54:10.436749 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-02-19 04:54:10.436754 | orchestrator | +---------------+------+---------+ 2026-02-19 04:54:10.704297 | orchestrator | 2026-02-19 04:54:10.704412 | orchestrator | # Nova 2026-02-19 04:54:10.704421 | orchestrator | 2026-02-19 04:54:10.704428 | orchestrator | + echo 2026-02-19 04:54:10.704435 | orchestrator | + echo '# Nova' 2026-02-19 04:54:10.704442 | orchestrator | + echo 2026-02-19 04:54:10.704449 | orchestrator | + openstack compute service list 2026-02-19 04:54:13.375846 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-19 04:54:13.376014 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-02-19 04:54:13.376041 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-19 04:54:13.376065 | orchestrator | | b2a68f23-d5e3-48ad-8585-1d471306ebf2 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-19T04:54:07.000000 | 2026-02-19 04:54:13.376138 | orchestrator | | e9095bbc-298d-455e-aa69-713748dc3be8 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-19T04:54:10.000000 | 2026-02-19 04:54:13.376150 | orchestrator | | 0ff7bc19-2b75-4f05-a208-7cd32ba055c9 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-19T04:54:11.000000 | 2026-02-19 04:54:13.376161 | orchestrator | | d7a5c41c-1f4e-4167-9d61-b7a3f69b7290 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-19T04:54:09.000000 | 2026-02-19 04:54:13.376172 | orchestrator | | bf1d9fed-2711-442e-be3d-d9c615056968 | nova-conductor | testbed-node-1 | internal | enabled | up | 
2026-02-19T04:54:12.000000 | 2026-02-19 04:54:13.376183 | orchestrator | | 90142925-8f0f-412f-8de9-98accb7eaeba | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-19T04:54:12.000000 | 2026-02-19 04:54:13.376194 | orchestrator | | be763874-8977-4b9c-adc6-65d918302fa5 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-19T04:54:03.000000 | 2026-02-19 04:54:13.376204 | orchestrator | | c9228503-63fd-49d6-8960-cb0b27262ce2 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-19T04:54:04.000000 | 2026-02-19 04:54:13.376215 | orchestrator | | 0cb3db62-7e75-4931-96dc-0ae8d92d1385 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-19T04:54:04.000000 | 2026-02-19 04:54:13.376226 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-19 04:54:13.643572 | orchestrator | + openstack hypervisor list 2026-02-19 04:54:16.894106 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-19 04:54:16.894187 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-02-19 04:54:16.894194 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-19 04:54:16.894200 | orchestrator | | ef587254-216a-4cbe-a5c0-c9f3b0907aa7 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-02-19 04:54:16.894204 | orchestrator | | 0c7d9d65-6c3d-4adf-ac4f-66e924afaa88 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-02-19 04:54:16.894209 | orchestrator | | c64f8e1b-5552-47ae-8c1d-2c509cd9f5fc | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-02-19 04:54:16.894214 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-19 04:54:17.211567 | orchestrator | 2026-02-19 04:54:17.211700 | orchestrator | # Run 
OpenStack test play 2026-02-19 04:54:17.211715 | orchestrator | 2026-02-19 04:54:17.211725 | orchestrator | + echo 2026-02-19 04:54:17.211731 | orchestrator | + echo '# Run OpenStack test play' 2026-02-19 04:54:17.211737 | orchestrator | + echo 2026-02-19 04:54:17.211742 | orchestrator | + osism apply --environment openstack test 2026-02-19 04:54:19.181113 | orchestrator | 2026-02-19 04:54:19 | INFO  | Trying to run play test in environment openstack 2026-02-19 04:54:29.332053 | orchestrator | 2026-02-19 04:54:29 | INFO  | Task 03766100-01f6-4dd1-81ec-cf8972a22e27 (test) was prepared for execution. 2026-02-19 04:54:29.332182 | orchestrator | 2026-02-19 04:54:29 | INFO  | It takes a moment until task 03766100-01f6-4dd1-81ec-cf8972a22e27 (test) has been started and output is visible here. 2026-02-19 04:57:15.962313 | orchestrator | 2026-02-19 04:57:15.962427 | orchestrator | PLAY [Create test project] ***************************************************** 2026-02-19 04:57:15.962444 | orchestrator | 2026-02-19 04:57:15.962456 | orchestrator | TASK [Create test domain] ****************************************************** 2026-02-19 04:57:15.962468 | orchestrator | Thursday 19 February 2026 04:54:33 +0000 (0:00:00.069) 0:00:00.069 ***** 2026-02-19 04:57:15.962479 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.962491 | orchestrator | 2026-02-19 04:57:15.962502 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-02-19 04:57:15.962513 | orchestrator | Thursday 19 February 2026 04:54:37 +0000 (0:00:03.652) 0:00:03.721 ***** 2026-02-19 04:57:15.962524 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.962535 | orchestrator | 2026-02-19 04:57:15.962569 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-02-19 04:57:15.962581 | orchestrator | Thursday 19 February 2026 04:54:41 +0000 (0:00:04.164) 0:00:07.886 ***** 2026-02-19 04:57:15.962591 | orchestrator | 
changed: [localhost] 2026-02-19 04:57:15.962602 | orchestrator | 2026-02-19 04:57:15.962613 | orchestrator | TASK [Create test project] ***************************************************** 2026-02-19 04:57:15.962624 | orchestrator | Thursday 19 February 2026 04:54:47 +0000 (0:00:06.575) 0:00:14.461 ***** 2026-02-19 04:57:15.962634 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.962645 | orchestrator | 2026-02-19 04:57:15.962656 | orchestrator | TASK [Create test user] ******************************************************** 2026-02-19 04:57:15.962667 | orchestrator | Thursday 19 February 2026 04:54:51 +0000 (0:00:04.025) 0:00:18.487 ***** 2026-02-19 04:57:15.962677 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.962688 | orchestrator | 2026-02-19 04:57:15.962727 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-02-19 04:57:15.962738 | orchestrator | Thursday 19 February 2026 04:54:56 +0000 (0:00:04.094) 0:00:22.581 ***** 2026-02-19 04:57:15.962750 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-02-19 04:57:15.962762 | orchestrator | changed: [localhost] => (item=member) 2026-02-19 04:57:15.962773 | orchestrator | changed: [localhost] => (item=creator) 2026-02-19 04:57:15.962784 | orchestrator | 2026-02-19 04:57:15.962795 | orchestrator | TASK [Create test server group] ************************************************ 2026-02-19 04:57:15.962806 | orchestrator | Thursday 19 February 2026 04:55:07 +0000 (0:00:11.579) 0:00:34.160 ***** 2026-02-19 04:57:15.962817 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.962827 | orchestrator | 2026-02-19 04:57:15.962838 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-02-19 04:57:15.962849 | orchestrator | Thursday 19 February 2026 04:55:11 +0000 (0:00:04.265) 0:00:38.426 ***** 2026-02-19 04:57:15.962861 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.962873 
| orchestrator | 2026-02-19 04:57:15.962885 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-02-19 04:57:15.962898 | orchestrator | Thursday 19 February 2026 04:55:16 +0000 (0:00:05.002) 0:00:43.429 ***** 2026-02-19 04:57:15.962911 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.962923 | orchestrator | 2026-02-19 04:57:15.962936 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-02-19 04:57:15.962948 | orchestrator | Thursday 19 February 2026 04:55:21 +0000 (0:00:04.224) 0:00:47.653 ***** 2026-02-19 04:57:15.962959 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.962970 | orchestrator | 2026-02-19 04:57:15.962980 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-02-19 04:57:15.962991 | orchestrator | Thursday 19 February 2026 04:55:25 +0000 (0:00:03.974) 0:00:51.628 ***** 2026-02-19 04:57:15.963002 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.963012 | orchestrator | 2026-02-19 04:57:15.963023 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-02-19 04:57:15.963034 | orchestrator | Thursday 19 February 2026 04:55:29 +0000 (0:00:04.131) 0:00:55.759 ***** 2026-02-19 04:57:15.963044 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.963055 | orchestrator | 2026-02-19 04:57:15.963066 | orchestrator | TASK [Create test network] ***************************************************** 2026-02-19 04:57:15.963076 | orchestrator | Thursday 19 February 2026 04:55:33 +0000 (0:00:04.669) 0:01:00.428 ***** 2026-02-19 04:57:15.963087 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.963098 | orchestrator | 2026-02-19 04:57:15.963109 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-02-19 04:57:15.963120 | orchestrator | Thursday 19 February 2026 04:55:38 +0000 (0:00:04.642) 
0:01:05.071 ***** 2026-02-19 04:57:15.963131 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.963142 | orchestrator | 2026-02-19 04:57:15.963153 | orchestrator | TASK [Create test router] ****************************************************** 2026-02-19 04:57:15.963163 | orchestrator | Thursday 19 February 2026 04:55:44 +0000 (0:00:05.478) 0:01:10.550 ***** 2026-02-19 04:57:15.963181 | orchestrator | changed: [localhost] 2026-02-19 04:57:15.963192 | orchestrator | 2026-02-19 04:57:15.963203 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-02-19 04:57:15.963214 | orchestrator | 2026-02-19 04:57:15.963224 | orchestrator | TASK [Get test server group] *************************************************** 2026-02-19 04:57:15.963235 | orchestrator | Thursday 19 February 2026 04:55:56 +0000 (0:00:12.009) 0:01:22.559 ***** 2026-02-19 04:57:15.963246 | orchestrator | ok: [localhost] 2026-02-19 04:57:15.963257 | orchestrator | 2026-02-19 04:57:15.963267 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-02-19 04:57:15.963278 | orchestrator | Thursday 19 February 2026 04:55:59 +0000 (0:00:03.591) 0:01:26.151 ***** 2026-02-19 04:57:15.963289 | orchestrator | skipping: [localhost] 2026-02-19 04:57:15.963299 | orchestrator | 2026-02-19 04:57:15.963310 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-02-19 04:57:15.963321 | orchestrator | Thursday 19 February 2026 04:55:59 +0000 (0:00:00.039) 0:01:26.190 ***** 2026-02-19 04:57:15.963331 | orchestrator | skipping: [localhost] 2026-02-19 04:57:15.963342 | orchestrator | 2026-02-19 04:57:15.963352 | orchestrator | TASK [Delete test instances] *************************************************** 2026-02-19 04:57:15.963363 | orchestrator | Thursday 19 February 2026 04:55:59 +0000 (0:00:00.049) 0:01:26.240 ***** 2026-02-19 04:57:15.963387 | orchestrator | skipping: 
[localhost] => (item=test-4)  2026-02-19 04:57:15.963399 | orchestrator | skipping: [localhost] => (item=test-3)  2026-02-19 04:57:15.963428 | orchestrator | skipping: [localhost] => (item=test-2)  2026-02-19 04:57:15.963440 | orchestrator | skipping: [localhost] => (item=test-1)  2026-02-19 04:57:15.963451 | orchestrator | skipping: [localhost] => (item=test)  2026-02-19 04:57:15.963461 | orchestrator | skipping: [localhost] 2026-02-19 04:57:15.963472 | orchestrator | 2026-02-19 04:57:15.963482 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-02-19 04:57:15.963493 | orchestrator | Thursday 19 February 2026 04:55:59 +0000 (0:00:00.159) 0:01:26.400 ***** 2026-02-19 04:57:15.963504 | orchestrator | skipping: [localhost] 2026-02-19 04:57:15.963514 | orchestrator | 2026-02-19 04:57:15.963525 | orchestrator | TASK [Create test instances] *************************************************** 2026-02-19 04:57:15.963536 | orchestrator | Thursday 19 February 2026 04:56:00 +0000 (0:00:00.163) 0:01:26.563 ***** 2026-02-19 04:57:15.963546 | orchestrator | changed: [localhost] => (item=test) 2026-02-19 04:57:15.963557 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-19 04:57:15.963567 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-19 04:57:15.963578 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-19 04:57:15.963589 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-19 04:57:15.963599 | orchestrator | 2026-02-19 04:57:15.963610 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-02-19 04:57:15.963620 | orchestrator | Thursday 19 February 2026 04:56:04 +0000 (0:00:04.576) 0:01:31.140 ***** 2026-02-19 04:57:15.963631 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 
2026-02-19 04:57:15.963642 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-02-19 04:57:15.963653 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-02-19 04:57:15.963664 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-02-19 04:57:15.963676 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j428686965651.3692', 'results_file': '/ansible/.ansible_async/j428686965651.3692', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:15.963689 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-02-19 04:57:15.963717 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j644784025856.3717', 'results_file': '/ansible/.ansible_async/j644784025856.3717', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:15.963736 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j432672754678.3742', 'results_file': '/ansible/.ansible_async/j432672754678.3742', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:15.963747 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j882069281362.3767', 'results_file': '/ansible/.ansible_async/j882069281362.3767', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:15.963758 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j196379781423.3792', 'results_file': '/ansible/.ansible_async/j196379781423.3792', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:15.963769 | orchestrator | 
2026-02-19 04:57:15.963779 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-02-19 04:57:15.963790 | orchestrator | Thursday 19 February 2026 04:57:02 +0000 (0:00:57.478) 0:02:28.618 ***** 2026-02-19 04:57:15.963801 | orchestrator | changed: [localhost] => (item=test) 2026-02-19 04:57:15.963811 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-19 04:57:15.963822 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-19 04:57:15.963832 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-19 04:57:15.963843 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-19 04:57:15.963853 | orchestrator | 2026-02-19 04:57:15.963864 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-02-19 04:57:15.963875 | orchestrator | Thursday 19 February 2026 04:57:06 +0000 (0:00:04.486) 0:02:33.105 ***** 2026-02-19 04:57:15.963886 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 
2026-02-19 04:57:15.963897 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j162037952190.3903', 'results_file': '/ansible/.ansible_async/j162037952190.3903', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:15.963908 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j820016992409.3928', 'results_file': '/ansible/.ansible_async/j820016992409.3928', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:15.963920 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j546782227675.3953', 'results_file': '/ansible/.ansible_async/j546782227675.3953', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:15.963945 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j615722633555.3978', 'results_file': '/ansible/.ansible_async/j615722633555.3978', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:56.314423 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j448294957800.4003', 'results_file': '/ansible/.ansible_async/j448294957800.4003', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:56.314532 | orchestrator | 2026-02-19 04:57:56.314546 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-02-19 04:57:56.314556 | orchestrator | Thursday 19 February 2026 04:57:15 +0000 (0:00:09.352) 0:02:42.457 ***** 2026-02-19 04:57:56.314565 | orchestrator | changed: [localhost] => (item=test) 2026-02-19 04:57:56.314576 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-19 04:57:56.314585 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-19 04:57:56.314594 | orchestrator | changed: 
[localhost] => (item=test-3) 2026-02-19 04:57:56.314607 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-19 04:57:56.314622 | orchestrator | 2026-02-19 04:57:56.314666 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-02-19 04:57:56.314683 | orchestrator | Thursday 19 February 2026 04:57:21 +0000 (0:00:05.073) 0:02:47.531 ***** 2026-02-19 04:57:56.314698 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 2026-02-19 04:57:56.314741 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j862925315273.4072', 'results_file': '/ansible/.ansible_async/j862925315273.4072', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:56.314757 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j990118601292.4097', 'results_file': '/ansible/.ansible_async/j990118601292.4097', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:56.314773 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j945919670351.4123', 'results_file': '/ansible/.ansible_async/j945919670351.4123', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:56.314788 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j265290768839.4156', 'results_file': '/ansible/.ansible_async/j265290768839.4156', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:56.314804 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j941592939016.4182', 'results_file': '/ansible/.ansible_async/j941592939016.4182', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-19 04:57:56.314836 | orchestrator | 2026-02-19 
04:57:56.314858 | orchestrator | TASK [Create test volume] ****************************************************** 2026-02-19 04:57:56.314867 | orchestrator | Thursday 19 February 2026 04:57:30 +0000 (0:00:09.572) 0:02:57.103 ***** 2026-02-19 04:57:56.314876 | orchestrator | changed: [localhost] 2026-02-19 04:57:56.314885 | orchestrator | 2026-02-19 04:57:56.314894 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-02-19 04:57:56.314902 | orchestrator | Thursday 19 February 2026 04:57:37 +0000 (0:00:06.463) 0:03:03.566 ***** 2026-02-19 04:57:56.314911 | orchestrator | changed: [localhost] 2026-02-19 04:57:56.314919 | orchestrator | 2026-02-19 04:57:56.314928 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-02-19 04:57:56.314937 | orchestrator | Thursday 19 February 2026 04:57:50 +0000 (0:00:13.477) 0:03:17.044 ***** 2026-02-19 04:57:56.314945 | orchestrator | ok: [localhost] 2026-02-19 04:57:56.314954 | orchestrator | 2026-02-19 04:57:56.314963 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-02-19 04:57:56.314973 | orchestrator | Thursday 19 February 2026 04:57:56 +0000 (0:00:05.477) 0:03:22.522 ***** 2026-02-19 04:57:56.314982 | orchestrator | ok: [localhost] => { 2026-02-19 04:57:56.314992 | orchestrator |  "msg": "192.168.112.155" 2026-02-19 04:57:56.315001 | orchestrator | } 2026-02-19 04:57:56.315011 | orchestrator | 2026-02-19 04:57:56.315024 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 04:57:56.315041 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-19 04:57:56.315056 | orchestrator | 2026-02-19 04:57:56.315076 | orchestrator | 2026-02-19 04:57:56.315094 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 04:57:56.315108 | 
orchestrator | Thursday 19 February 2026 04:57:56 +0000 (0:00:00.044) 0:03:22.567 ***** 2026-02-19 04:57:56.315122 | orchestrator | =============================================================================== 2026-02-19 04:57:56.315135 | orchestrator | Wait for instance creation to complete --------------------------------- 57.48s 2026-02-19 04:57:56.315156 | orchestrator | Attach test volume ----------------------------------------------------- 13.48s 2026-02-19 04:57:56.315171 | orchestrator | Create test router ----------------------------------------------------- 12.01s 2026-02-19 04:57:56.315214 | orchestrator | Add member roles to user test ------------------------------------------ 11.58s 2026-02-19 04:57:56.315231 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.57s 2026-02-19 04:57:56.315245 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.35s 2026-02-19 04:57:56.315260 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.58s 2026-02-19 04:57:56.315292 | orchestrator | Create test volume ------------------------------------------------------ 6.46s 2026-02-19 04:57:56.315302 | orchestrator | Create test subnet ------------------------------------------------------ 5.48s 2026-02-19 04:57:56.315311 | orchestrator | Create floating ip address ---------------------------------------------- 5.48s 2026-02-19 04:57:56.315326 | orchestrator | Add tag to instances ---------------------------------------------------- 5.07s 2026-02-19 04:57:56.315347 | orchestrator | Create ssh security group ----------------------------------------------- 5.00s 2026-02-19 04:57:56.315363 | orchestrator | Create test keypair ----------------------------------------------------- 4.67s 2026-02-19 04:57:56.315376 | orchestrator | Create test network ----------------------------------------------------- 4.64s 2026-02-19 04:57:56.315394 | orchestrator | Create 
test instances --------------------------------------------------- 4.58s 2026-02-19 04:57:56.315413 | orchestrator | Add metadata to instances ----------------------------------------------- 4.49s 2026-02-19 04:57:56.315427 | orchestrator | Create test server group ------------------------------------------------ 4.27s 2026-02-19 04:57:56.315441 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.22s 2026-02-19 04:57:56.315456 | orchestrator | Create test-admin user -------------------------------------------------- 4.16s 2026-02-19 04:57:56.315470 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.13s 2026-02-19 04:57:56.631776 | orchestrator | + server_list 2026-02-19 04:57:56.631899 | orchestrator | + openstack --os-cloud test server list 2026-02-19 04:58:00.409997 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-19 04:58:00.410175 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-02-19 04:58:00.410191 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-19 04:58:00.410203 | orchestrator | | b8cd6024-a7a4-4ca1-8ff0-9c7b81491f08 | test-4 | ACTIVE | test=192.168.112.159, 192.168.200.16 | N/A (booted from volume) | SCS-1L-1 | 2026-02-19 04:58:00.410215 | orchestrator | | 0f5cf448-e434-410c-93a7-bb8a4d8ba6a5 | test-3 | ACTIVE | test=192.168.112.174, 192.168.200.79 | N/A (booted from volume) | SCS-1L-1 | 2026-02-19 04:58:00.410226 | orchestrator | | 593e1197-bca7-4899-a536-093b79ea82e4 | test | ACTIVE | test=192.168.112.155, 192.168.200.221 | N/A (booted from volume) | SCS-1L-1 | 2026-02-19 04:58:00.410237 | orchestrator | | 65b20900-b6c7-4331-bd04-c2bac166016a | test-1 | ACTIVE | test=192.168.112.195, 192.168.200.228 | N/A (booted from volume) 
| SCS-1L-1 | 2026-02-19 04:58:00.410247 | orchestrator | | fa184b96-fd28-47be-95c4-06593a543b79 | test-2 | ACTIVE | test=192.168.112.191, 192.168.200.64 | N/A (booted from volume) | SCS-1L-1 | 2026-02-19 04:58:00.410258 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-19 04:58:00.703126 | orchestrator | + openstack --os-cloud test server show test 2026-02-19 04:58:03.972360 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-19 04:58:03.972485 | orchestrator | | Field | Value | 2026-02-19 04:58:03.972537 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-19 04:58:03.972557 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-19 04:58:03.972569 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-19 04:58:03.972581 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-19 04:58:03.972592 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-02-19 04:58:03.972603 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-19 04:58:03.972615 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-19 04:58:03.972644 | orchestrator | | 
OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-19 04:58:03.972656 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-19 04:58:03.972675 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-19 04:58:03.972833 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-19 04:58:03.972853 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-19 04:58:03.972867 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-19 04:58:03.972880 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-19 04:58:03.972896 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-19 04:58:03.972916 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-19 04:58:03.972935 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-19T04:56:37.000000 |
2026-02-19 04:58:03.972967 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-19 04:58:03.973005 | orchestrator | | accessIPv4 | |
2026-02-19 04:58:03.973090 | orchestrator | | accessIPv6 | |
2026-02-19 04:58:03.973113 | orchestrator | | addresses | test=192.168.112.155, 192.168.200.221 |
2026-02-19 04:58:03.973143 | orchestrator | | config_drive | |
2026-02-19 04:58:03.973164 | orchestrator | | created | 2026-02-19T04:56:10Z |
2026-02-19 04:58:03.973183 | orchestrator | | description | None |
2026-02-19 04:58:03.973203 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-19 04:58:03.973224 | orchestrator | | hostId | c0b5ddccf018f2f6bbabe7a5878d49acd3a9df20a57d41ae3c781a25 |
2026-02-19 04:58:03.973244 | orchestrator | | host_status | None |
2026-02-19 04:58:03.973292 | orchestrator | | id | 593e1197-bca7-4899-a536-093b79ea82e4 |
2026-02-19 04:58:03.973314 | orchestrator | | image | N/A (booted from volume) |
2026-02-19 04:58:03.973333 | orchestrator | | key_name | test |
2026-02-19 04:58:03.973354 | orchestrator | | locked | False |
2026-02-19 04:58:03.973374 | orchestrator | | locked_reason | None |
2026-02-19 04:58:03.973394 | orchestrator | | name | test |
2026-02-19 04:58:03.973414 | orchestrator | | pinned_availability_zone | None |
2026-02-19 04:58:03.973428 | orchestrator | | progress | 0 |
2026-02-19 04:58:03.973439 | orchestrator | | project_id | ec1ac6dba5c14dacbcc242da667a8aa8 |
2026-02-19 04:58:03.973451 | orchestrator | | properties | hostname='test' |
2026-02-19 04:58:03.973486 | orchestrator | | security_groups | name='ssh' |
2026-02-19 04:58:03.973499 | orchestrator | | | name='icmp' |
2026-02-19 04:58:03.973510 | orchestrator | | server_groups | None |
2026-02-19 04:58:03.973521 | orchestrator | | status | ACTIVE |
2026-02-19 04:58:03.973542 | orchestrator | | tags | test |
2026-02-19 04:58:03.973554 | orchestrator | | trusted_image_certificates | None |
2026-02-19 04:58:03.973565 | orchestrator | | updated | 2026-02-19T04:57:07Z |
2026-02-19 04:58:03.973576 | orchestrator | | user_id | 598097b67fa4427d8c4920dfee00e418 |
2026-02-19 04:58:03.973587 | orchestrator | | volumes_attached | delete_on_termination='True', id='5adbaa10-5053-4879-9d1b-effc522e2a2a' |
2026-02-19 04:58:03.973604 | orchestrator | | | delete_on_termination='False', id='4bdcdcf6-d566-4164-8e32-674de76243e1' |
2026-02-19 04:58:03.975053 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:04.257450 | orchestrator | + openstack --os-cloud test server show test-1
2026-02-19 04:58:07.470352 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:07.470466 | orchestrator | | Field | Value |
2026-02-19 04:58:07.470498 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:07.470510 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-19 04:58:07.470520 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-19 04:58:07.470530 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-19 04:58:07.470540 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-02-19 04:58:07.470570 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-19 04:58:07.470581 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-19 04:58:07.470609 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-19 04:58:07.470619 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-19 04:58:07.470629 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-19 04:58:07.470644 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-19 04:58:07.470654 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-19 04:58:07.470664 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-19 04:58:07.470674 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-19 04:58:07.470691 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-19 04:58:07.470702 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-19 04:58:07.470777 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-19T04:56:39.000000 |
2026-02-19 04:58:07.470797 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-19 04:58:07.470807 | orchestrator | | accessIPv4 | |
2026-02-19 04:58:07.470817 | orchestrator | | accessIPv6 | |
2026-02-19 04:58:07.470832 | orchestrator | | addresses | test=192.168.112.195, 192.168.200.228 |
2026-02-19 04:58:07.470842 | orchestrator | | config_drive | |
2026-02-19 04:58:07.470852 | orchestrator | | created | 2026-02-19T04:56:10Z |
2026-02-19 04:58:07.470869 | orchestrator | | description | None |
2026-02-19 04:58:07.470879 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-19 04:58:07.470889 | orchestrator | | hostId | c0b5ddccf018f2f6bbabe7a5878d49acd3a9df20a57d41ae3c781a25 |
2026-02-19 04:58:07.470899 | orchestrator | | host_status | None |
2026-02-19 04:58:07.470917 | orchestrator | | id | 65b20900-b6c7-4331-bd04-c2bac166016a |
2026-02-19 04:58:07.470927 | orchestrator | | image | N/A (booted from volume) |
2026-02-19 04:58:07.470937 | orchestrator | | key_name | test |
2026-02-19 04:58:07.470951 | orchestrator | | locked | False |
2026-02-19 04:58:07.470961 | orchestrator | | locked_reason | None |
2026-02-19 04:58:07.470971 | orchestrator | | name | test-1 |
2026-02-19 04:58:07.470987 | orchestrator | | pinned_availability_zone | None |
2026-02-19 04:58:07.470997 | orchestrator | | progress | 0 |
2026-02-19 04:58:07.471006 | orchestrator | | project_id | ec1ac6dba5c14dacbcc242da667a8aa8 |
2026-02-19 04:58:07.471016 | orchestrator | | properties | hostname='test-1' |
2026-02-19 04:58:07.471033 | orchestrator | | security_groups | name='ssh' |
2026-02-19 04:58:07.471043 | orchestrator | | | name='icmp' |
2026-02-19 04:58:07.471053 | orchestrator | | server_groups | None |
2026-02-19 04:58:07.471062 | orchestrator | | status | ACTIVE |
2026-02-19 04:58:07.471073 | orchestrator | | tags | test |
2026-02-19 04:58:07.471088 | orchestrator | | trusted_image_certificates | None |
2026-02-19 04:58:07.471098 | orchestrator | | updated | 2026-02-19T04:57:08Z |
2026-02-19 04:58:07.471107 | orchestrator | | user_id | 598097b67fa4427d8c4920dfee00e418 |
2026-02-19 04:58:07.471117 | orchestrator | | volumes_attached | delete_on_termination='True', id='b776b0bb-1258-40a6-85a9-5b52b19c0e9c' |
2026-02-19 04:58:07.475667 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:07.736468 | orchestrator | + openstack --os-cloud test server show test-2
2026-02-19 04:58:10.784142 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:10.784236 | orchestrator | | Field | Value |
2026-02-19 04:58:10.784266 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:10.784280 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-19 04:58:10.784307 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-19 04:58:10.784317 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-19 04:58:10.784326 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-02-19 04:58:10.784335 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-19 04:58:10.784344 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-19 04:58:10.784368 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-19 04:58:10.784377 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-19 04:58:10.784386 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-19 04:58:10.784395 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-19 04:58:10.784414 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-19 04:58:10.784423 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-19 04:58:10.784432 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-19 04:58:10.784441 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-19 04:58:10.784449 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-19 04:58:10.784458 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-19T04:56:37.000000 |
2026-02-19 04:58:10.784473 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-19 04:58:10.784482 | orchestrator | | accessIPv4 | |
2026-02-19 04:58:10.784490 | orchestrator | | accessIPv6 | |
2026-02-19 04:58:10.784503 | orchestrator | | addresses | test=192.168.112.191, 192.168.200.64 |
2026-02-19 04:58:10.784519 | orchestrator | | config_drive | |
2026-02-19 04:58:10.784528 | orchestrator | | created | 2026-02-19T04:56:10Z |
2026-02-19 04:58:10.784536 | orchestrator | | description | None |
2026-02-19 04:58:10.784545 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-19 04:58:10.784554 | orchestrator | | hostId | c0b5ddccf018f2f6bbabe7a5878d49acd3a9df20a57d41ae3c781a25 |
2026-02-19 04:58:10.784563 | orchestrator | | host_status | None |
2026-02-19 04:58:10.784578 | orchestrator | | id | fa184b96-fd28-47be-95c4-06593a543b79 |
2026-02-19 04:58:10.784587 | orchestrator | | image | N/A (booted from volume) |
2026-02-19 04:58:10.784596 | orchestrator | | key_name | test |
2026-02-19 04:58:10.784614 | orchestrator | | locked | False |
2026-02-19 04:58:10.784623 | orchestrator | | locked_reason | None |
2026-02-19 04:58:10.784632 | orchestrator | | name | test-2 |
2026-02-19 04:58:10.784640 | orchestrator | | pinned_availability_zone | None |
2026-02-19 04:58:10.784649 | orchestrator | | progress | 0 |
2026-02-19 04:58:10.784658 | orchestrator | | project_id | ec1ac6dba5c14dacbcc242da667a8aa8 |
2026-02-19 04:58:10.784667 | orchestrator | | properties | hostname='test-2' |
2026-02-19 04:58:10.784682 | orchestrator | | security_groups | name='ssh' |
2026-02-19 04:58:10.784693 | orchestrator | | | name='icmp' |
2026-02-19 04:58:10.784785 | orchestrator | | server_groups | None |
2026-02-19 04:58:10.784803 | orchestrator | | status | ACTIVE |
2026-02-19 04:58:10.784814 | orchestrator | | tags | test |
2026-02-19 04:58:10.784824 | orchestrator | | trusted_image_certificates | None |
2026-02-19 04:58:10.784835 | orchestrator | | updated | 2026-02-19T04:57:09Z |
2026-02-19 04:58:10.784845 | orchestrator | | user_id | 598097b67fa4427d8c4920dfee00e418 |
2026-02-19 04:58:10.784855 | orchestrator | | volumes_attached | delete_on_termination='True', id='3b54934e-7b61-4c9f-ab59-1950e59775a5' |
2026-02-19 04:58:10.789478 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:11.036979 | orchestrator | + openstack --os-cloud test server show test-3
2026-02-19 04:58:14.112817 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:14.112945 | orchestrator | | Field | Value |
2026-02-19 04:58:14.112960 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:14.112988 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-19 04:58:14.113000 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-19 04:58:14.113011 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-19 04:58:14.113023 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-02-19 04:58:14.113034 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-19 04:58:14.113045 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-19 04:58:14.113076 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-19 04:58:14.113095 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-19 04:58:14.113106 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-19 04:58:14.113117 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-19 04:58:14.113133 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-19 04:58:14.113144 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-19 04:58:14.113155 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-19 04:58:14.113166 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-19 04:58:14.113177 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-19 04:58:14.113188 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-19T04:56:38.000000 |
2026-02-19 04:58:14.113206 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-19 04:58:14.113224 | orchestrator | | accessIPv4 | |
2026-02-19 04:58:14.113235 | orchestrator | | accessIPv6 | |
2026-02-19 04:58:14.113246 | orchestrator | | addresses | test=192.168.112.174, 192.168.200.79 |
2026-02-19 04:58:14.113681 | orchestrator | | config_drive | |
2026-02-19 04:58:14.113698 | orchestrator | | created | 2026-02-19T04:56:10Z |
2026-02-19 04:58:14.113711 | orchestrator | | description | None |
2026-02-19 04:58:14.113758 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-19 04:58:14.113777 | orchestrator | | hostId | c0b5ddccf018f2f6bbabe7a5878d49acd3a9df20a57d41ae3c781a25 |
2026-02-19 04:58:14.113797 | orchestrator | | host_status | None |
2026-02-19 04:58:14.113843 | orchestrator | | id | 0f5cf448-e434-410c-93a7-bb8a4d8ba6a5 |
2026-02-19 04:58:14.113872 | orchestrator | | image | N/A (booted from volume) |
2026-02-19 04:58:14.113892 | orchestrator | | key_name | test |
2026-02-19 04:58:14.113913 | orchestrator | | locked | False |
2026-02-19 04:58:14.113933 | orchestrator | | locked_reason | None |
2026-02-19 04:58:14.113950 | orchestrator | | name | test-3 |
2026-02-19 04:58:14.113962 | orchestrator | | pinned_availability_zone | None |
2026-02-19 04:58:14.113973 | orchestrator | | progress | 0 |
2026-02-19 04:58:14.113984 | orchestrator | | project_id | ec1ac6dba5c14dacbcc242da667a8aa8 |
2026-02-19 04:58:14.114004 | orchestrator | | properties | hostname='test-3' |
2026-02-19 04:58:14.114120 | orchestrator | | security_groups | name='ssh' |
2026-02-19 04:58:14.114144 | orchestrator | | | name='icmp' |
2026-02-19 04:58:14.114156 | orchestrator | | server_groups | None |
2026-02-19 04:58:14.114167 | orchestrator | | status | ACTIVE |
2026-02-19 04:58:14.114178 | orchestrator | | tags | test |
2026-02-19 04:58:14.114189 | orchestrator | | trusted_image_certificates | None |
2026-02-19 04:58:14.114200 | orchestrator | | updated | 2026-02-19T04:57:10Z |
2026-02-19 04:58:14.114211 | orchestrator | | user_id | 598097b67fa4427d8c4920dfee00e418 |
2026-02-19 04:58:14.114235 | orchestrator | | volumes_attached | delete_on_termination='True', id='c3115427-50bc-43e3-a64b-5183d47f754c' |
2026-02-19 04:58:14.117348 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:14.406431 | orchestrator | + openstack --os-cloud test server show test-4
2026-02-19 04:58:17.380158 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:17.380278 | orchestrator | | Field | Value |
2026-02-19 04:58:17.380294 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:17.380378 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-19 04:58:17.380391 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-19 04:58:17.380401 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-19 04:58:17.380411 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-02-19 04:58:17.380442 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-19 04:58:17.380452 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-19 04:58:17.380482 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-19 04:58:17.380493 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-19 04:58:17.380509 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-19 04:58:17.380519 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-19 04:58:17.380529 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-19 04:58:17.380541 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-19 04:58:17.380560 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-19 04:58:17.380587 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-19 04:58:17.380619 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-19 04:58:17.380637 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-19T04:56:39.000000 |
2026-02-19 04:58:17.380666 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-19 04:58:17.380693 | orchestrator | | accessIPv4 | |
2026-02-19 04:58:17.380739 | orchestrator | | accessIPv6 | |
2026-02-19 04:58:17.380760 | orchestrator | | addresses | test=192.168.112.159, 192.168.200.16 |
2026-02-19 04:58:17.380777 | orchestrator | | config_drive | |
2026-02-19 04:58:17.380794 | orchestrator | | created | 2026-02-19T04:56:14Z |
2026-02-19 04:58:17.380813 | orchestrator | | description | None |
2026-02-19 04:58:17.380843 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-19 04:58:17.380861 | orchestrator | | hostId | 3b8cf4b777c182ac41cb10890c9a3e27d6cb73fc5b6c2872c47a456f |
2026-02-19 04:58:17.380873 | orchestrator | | host_status | None |
2026-02-19 04:58:17.380894 | orchestrator | | id | b8cd6024-a7a4-4ca1-8ff0-9c7b81491f08 |
2026-02-19 04:58:17.380911 | orchestrator | | image | N/A (booted from volume) |
2026-02-19 04:58:17.380923 | orchestrator | | key_name | test |
2026-02-19 04:58:17.380934 | orchestrator | | locked | False |
2026-02-19 04:58:17.380945 | orchestrator | | locked_reason | None |
2026-02-19 04:58:17.380956 | orchestrator | | name | test-4 |
2026-02-19 04:58:17.380974 | orchestrator | | pinned_availability_zone | None |
2026-02-19 04:58:17.380986 | orchestrator | | progress | 0 |
2026-02-19 04:58:17.380997 | orchestrator | | project_id | ec1ac6dba5c14dacbcc242da667a8aa8 |
2026-02-19 04:58:17.381008 | orchestrator | | properties | hostname='test-4' |
2026-02-19 04:58:17.381027 | orchestrator | | security_groups | name='ssh' |
2026-02-19 04:58:17.381043 | orchestrator | | | name='icmp' |
2026-02-19 04:58:17.381053 | orchestrator | | server_groups | None |
2026-02-19 04:58:17.381063 | orchestrator | | status | ACTIVE |
2026-02-19 04:58:17.381073 | orchestrator | | tags | test |
2026-02-19 04:58:17.381090 | orchestrator | | trusted_image_certificates | None |
2026-02-19 04:58:17.381100 | orchestrator | | updated | 2026-02-19T04:57:10Z |
2026-02-19 04:58:17.381110 | orchestrator | | user_id | 598097b67fa4427d8c4920dfee00e418 |
2026-02-19 04:58:17.381119 | orchestrator | | volumes_attached | delete_on_termination='True', id='31e2710b-8161-434d-ad56-372110240354' |
2026-02-19 04:58:17.384247 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-19 04:58:17.676868 | orchestrator | + server_ping
2026-02-19 04:58:17.678281 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-02-19 04:58:17.678381 | orchestrator | ++ tr -d '\r'
2026-02-19 04:58:20.577656 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-19 04:58:20.577867 | orchestrator | + ping -c3 192.168.112.195
2026-02-19 04:58:20.589122 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data.
2026-02-19 04:58:20.589219 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=6.40 ms
2026-02-19 04:58:21.587480 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=2.71 ms
2026-02-19 04:58:22.589561 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=2.81 ms
2026-02-19 04:58:22.589658 | orchestrator | 
2026-02-19 04:58:22.589673 | orchestrator | --- 192.168.112.195 ping statistics ---
2026-02-19 04:58:22.589685 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-19 04:58:22.589697 | orchestrator | rtt min/avg/max/mdev = 2.707/3.972/6.399/1.716 ms
2026-02-19 04:58:22.589708 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-19 04:58:22.589757 | orchestrator | + ping -c3 192.168.112.155
2026-02-19 04:58:22.605002 | orchestrator | PING 192.168.112.155 (192.168.112.155) 56(84) bytes of data.
2026-02-19 04:58:22.605102 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=1 ttl=63 time=10.0 ms
2026-02-19 04:58:23.599097 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=2 ttl=63 time=2.98 ms
2026-02-19 04:58:24.600589 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=3 ttl=63 time=2.34 ms
2026-02-19 04:58:24.600699 | orchestrator | 
2026-02-19 04:58:24.600771 | orchestrator | --- 192.168.112.155 ping statistics ---
2026-02-19 04:58:24.600787 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-19 04:58:24.600798 | orchestrator | rtt min/avg/max/mdev = 2.340/5.106/9.995/3.466 ms
2026-02-19 04:58:24.600843 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-19 04:58:24.600855 | orchestrator | + ping -c3 192.168.112.159
2026-02-19 04:58:24.615636 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data.
2026-02-19 04:58:24.615787 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=10.5 ms
2026-02-19 04:58:25.609797 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=3.03 ms
2026-02-19 04:58:26.610746 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=1.99 ms
2026-02-19 04:58:26.610851 | orchestrator | 
2026-02-19 04:58:26.610865 | orchestrator | --- 192.168.112.159 ping statistics ---
2026-02-19 04:58:26.610875 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-19 04:58:26.610943 | orchestrator | rtt min/avg/max/mdev = 1.989/5.182/10.527/3.803 ms
2026-02-19 04:58:26.610958 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-19 04:58:26.610966 | orchestrator | + ping -c3 192.168.112.191
2026-02-19 04:58:26.622910 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2026-02-19 04:58:26.622991 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=7.36 ms
2026-02-19 04:58:27.620705 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=3.27 ms
2026-02-19 04:58:28.621665 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=2.05 ms
2026-02-19 04:58:28.621808 | orchestrator | 
2026-02-19 04:58:28.621826 | orchestrator | --- 192.168.112.191 ping statistics ---
2026-02-19 04:58:28.621839 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-19 04:58:28.621851 | orchestrator | rtt min/avg/max/mdev = 2.045/4.224/7.363/2.274 ms
2026-02-19 04:58:28.622133 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-19 04:58:28.622158 | orchestrator | + ping -c3 192.168.112.174
2026-02-19 04:58:28.635534 | orchestrator | PING 192.168.112.174 (192.168.112.174) 56(84) bytes of data.
2026-02-19 04:58:28.635638 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=1 ttl=63 time=9.31 ms
2026-02-19 04:58:29.630293 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=2 ttl=63 time=2.32 ms
2026-02-19 04:58:30.631588 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=3 ttl=63 time=2.17 ms
2026-02-19 04:58:30.632530 | orchestrator | 
2026-02-19 04:58:30.632577 | orchestrator | --- 192.168.112.174 ping statistics ---
2026-02-19 04:58:30.632597 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-02-19 04:58:30.632615 | orchestrator | rtt min/avg/max/mdev = 2.172/4.600/9.307/3.328 ms
2026-02-19 04:58:30.632650 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-19 04:58:30.857685 | orchestrator | ok: Runtime: 0:08:01.866998
2026-02-19 04:58:30.907512 | 
2026-02-19 04:58:30.907665 | TASK [Run tempest]
2026-02-19 04:58:31.441013 | orchestrator | skipping: Conditional result was False
2026-02-19 04:58:31.459293 | 
2026-02-19 04:58:31.459456 | TASK [Check prometheus alert status]
2026-02-19 04:58:31.997249 | orchestrator | skipping: Conditional result was False
2026-02-19 04:58:32.011141 | 
2026-02-19 04:58:32.011304 | PLAY [Upgrade testbed]
2026-02-19 04:58:32.022464 | 
2026-02-19 04:58:32.022582 | TASK [Print next ceph version]
2026-02-19 04:58:32.101538 | orchestrator | ok
2026-02-19 04:58:32.111944 | 
2026-02-19 04:58:32.112069 | TASK [Print next openstack version]
2026-02-19 04:58:32.180445 | orchestrator | ok
2026-02-19 04:58:32.191791 | 
2026-02-19 04:58:32.191900 | TASK [Print next manager version]
2026-02-19 04:58:32.271326 | orchestrator | ok
2026-02-19 04:58:32.281636 | 
2026-02-19 04:58:32.281756 | TASK [Set cloud fact (Zuul deployment)]
2026-02-19 04:58:32.341664 | orchestrator | ok
2026-02-19 04:58:32.353812 | 
2026-02-19 04:58:32.353944 | TASK [Set cloud fact (local deployment)]
2026-02-19 04:58:32.379245 | orchestrator | skipping: Conditional result was False
2026-02-19 04:58:32.390730 | 
2026-02-19 04:58:32.390886 | TASK [Fetch manager address]
2026-02-19 04:58:32.674234 | orchestrator | ok
2026-02-19 04:58:32.694555 | 
2026-02-19 04:58:32.694735 | TASK [Set manager_host address]
2026-02-19 04:58:32.778597 | orchestrator | ok
2026-02-19 04:58:32.791667 | 
2026-02-19 04:58:32.791808 | TASK [Run upgrade]
2026-02-19 04:58:33.467699 | orchestrator | + set -e
2026-02-19 04:58:33.467934 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-02-19 04:58:33.467958 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-02-19 04:58:33.467979 | orchestrator | + CEPH_VERSION=reef
2026-02-19 04:58:33.467992 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-02-19 04:58:33.468004 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-02-19 04:58:33.468025 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release'
2026-02-19 04:58:33.477249 | orchestrator | + set -e
2026-02-19 04:58:33.477342 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-19 04:58:33.477356 | orchestrator | ++ export INTERACTIVE=false
2026-02-19 04:58:33.477370 | orchestrator | ++ INTERACTIVE=false
2026-02-19 04:58:33.477378 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-19 04:58:33.477391 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-19 04:58:33.478582 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-02-19 04:58:33.522156 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-02-19 04:58:33.523028 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-02-19 04:58:33.558359 | orchestrator | 
2026-02-19 04:58:33.558456 | orchestrator | # UPGRADE MANAGER
2026-02-19 04:58:33.558473 | orchestrator | 
2026-02-19 04:58:33.558483 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-02-19 04:58:33.558493 | orchestrator | + echo
2026-02-19 04:58:33.558503 | orchestrator | + echo '# UPGRADE MANAGER'
2026-02-19 04:58:33.558514 | orchestrator | + echo
2026-02-19 04:58:33.558523 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-02-19 04:58:33.558533 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-02-19 04:58:33.558542 | orchestrator | + CEPH_VERSION=reef
2026-02-19 04:58:33.558551 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-02-19 04:58:33.558560 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-02-19 04:58:33.558569 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1
2026-02-19 04:58:33.565882 | orchestrator | + set -e
2026-02-19 04:58:33.565969 | orchestrator | + VERSION=10.0.0-rc.1
2026-02-19 04:58:33.565982 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml
2026-02-19 04:58:33.569674 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]]
2026-02-19 04:58:33.569802 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-19 04:58:33.572806 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-19 04:58:33.577961 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-19 04:58:33.586985 | orchestrator | /opt/configuration ~
2026-02-19 04:58:33.587038 | orchestrator | + set -e
2026-02-19 04:58:33.587049 | orchestrator | + pushd /opt/configuration
2026-02-19 04:58:33.587058 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-19 04:58:33.587067 | orchestrator | + source /opt/venv/bin/activate
2026-02-19 04:58:33.588534 | orchestrator | ++ deactivate nondestructive
2026-02-19 04:58:33.588561 | orchestrator | ++ '[' -n '' ']'
2026-02-19 04:58:33.588568 | orchestrator | ++ '[' -n '' ']'
2026-02-19 04:58:33.588576 | orchestrator | ++ hash -r
2026-02-19 04:58:33.588583 | orchestrator | ++ '[' -n '' ']'
2026-02-19 04:58:33.588590 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-19 04:58:33.588597 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-19 04:58:33.588604 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-19 04:58:33.588612 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-19 04:58:33.588619 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-19 04:58:33.588626 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-19 04:58:33.588633 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-19 04:58:33.588650 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-19 04:58:33.588659 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-19 04:58:33.588667 | orchestrator | ++ export PATH
2026-02-19 04:58:33.589097 | orchestrator | ++ '[' -n '' ']'
2026-02-19 04:58:33.589115 | orchestrator | ++ '[' -z '' ']'
2026-02-19 04:58:33.589121 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-19 04:58:33.589128 | orchestrator | ++ PS1='(venv) '
2026-02-19 04:58:33.589134 | orchestrator | ++ export PS1
2026-02-19 04:58:33.589141 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-19 04:58:33.589147 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-19 04:58:33.589154 | orchestrator | ++ hash -r
2026-02-19 04:58:33.589163 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-19 04:58:34.662043 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-19 04:58:34.663565 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-19 04:58:34.664971 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-19 04:58:34.666492 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-19 04:58:34.667676 | orchestrator | Requirement already satisfied: packaging in
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-19 04:58:34.679345 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-19 04:58:34.680927 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-19 04:58:34.682222 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-19 04:58:34.683700 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-19 04:58:34.715955 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-19 04:58:34.717745 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-19 04:58:34.719457 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-19 04:58:34.721038 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-19 04:58:34.725647 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-19 04:58:34.936171 | orchestrator | ++ which gilt 2026-02-19 04:58:34.937490 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-19 04:58:34.937516 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-19 04:58:35.226230 | orchestrator | osism.cfg-generics: 2026-02-19 04:58:35.330001 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-19 04:58:35.330693 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-19 04:58:35.331515 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-19 04:58:35.331574 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-19 04:58:36.253984 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-19 04:58:36.265263 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-19 04:58:36.602584 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-19 04:58:36.653106 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-19 04:58:36.653208 | orchestrator | + deactivate 2026-02-19 04:58:36.653223 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-19 04:58:36.653237 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-19 04:58:36.653247 | orchestrator | + export PATH 2026-02-19 04:58:36.653257 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-19 04:58:36.653268 | orchestrator | + '[' -n '' ']' 2026-02-19 04:58:36.653279 | orchestrator | + hash -r 2026-02-19 04:58:36.653289 | orchestrator | + '[' -n '' ']' 2026-02-19 04:58:36.653299 | orchestrator | + unset VIRTUAL_ENV 2026-02-19 04:58:36.653309 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-19 04:58:36.653337 | orchestrator | ~ 2026-02-19 04:58:36.653356 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-19 04:58:36.653374 | orchestrator | + unset -f deactivate 2026-02-19 04:58:36.653392 | orchestrator | + popd 2026-02-19 04:58:36.655508 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-02-19 04:58:36.655623 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-02-19 04:58:36.664386 | orchestrator | + set -e 2026-02-19 04:58:36.664466 | orchestrator | + NAMESPACE=kolla/release 2026-02-19 04:58:36.664482 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-19 04:58:36.672984 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-19 04:58:36.681759 | orchestrator | /opt/configuration ~ 2026-02-19 04:58:36.681863 | orchestrator | + set -e 2026-02-19 04:58:36.681879 | orchestrator | + pushd /opt/configuration 2026-02-19 04:58:36.681892 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-19 04:58:36.681904 | orchestrator | + source /opt/venv/bin/activate 2026-02-19 04:58:36.681928 | orchestrator | ++ deactivate nondestructive 2026-02-19 04:58:36.681940 | orchestrator | ++ '[' -n '' ']' 2026-02-19 04:58:36.681959 | orchestrator | ++ '[' -n '' ']' 2026-02-19 04:58:36.681970 | orchestrator | ++ hash -r 2026-02-19 04:58:36.681981 | orchestrator | ++ '[' -n '' ']' 2026-02-19 04:58:36.681992 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-19 04:58:36.682003 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-19 04:58:36.682064 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-19 04:58:36.682092 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-19 04:58:36.682104 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-19 04:58:36.682115 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-19 04:58:36.682131 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-19 04:58:36.682150 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-19 04:58:36.682164 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-19 04:58:36.682180 | orchestrator | ++ export PATH 2026-02-19 04:58:36.682192 | orchestrator | ++ '[' -n '' ']' 2026-02-19 04:58:36.682203 | orchestrator | ++ '[' -z '' ']' 2026-02-19 04:58:36.682214 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-19 04:58:36.682224 | orchestrator | ++ PS1='(venv) ' 2026-02-19 04:58:36.682235 | orchestrator | ++ export PS1 2026-02-19 04:58:36.682246 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-19 04:58:36.682564 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-19 04:58:36.682826 | orchestrator | ++ hash -r 2026-02-19 04:58:36.682845 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-19 04:58:37.227074 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-19 04:58:37.228296 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-19 04:58:37.229417 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-19 04:58:37.230598 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-19 04:58:37.231777 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-19 04:58:37.242214 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-19 04:58:37.243777 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-19 04:58:37.244676 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-19 04:58:37.246094 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-19 04:58:37.279186 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-19 04:58:37.280623 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-19 04:58:37.282423 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-19 04:58:37.283651 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-19 04:58:37.287916 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-19 04:58:37.513453 | orchestrator | ++ which gilt 2026-02-19 04:58:37.515017 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-19 04:58:37.515104 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-19 04:58:37.687542 | orchestrator | osism.cfg-generics: 2026-02-19 04:58:37.755564 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-19 04:58:37.755701 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-19 04:58:37.755781 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-19 04:58:37.755808 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-19 04:58:38.380242 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-19 04:58:38.394538 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-19 04:58:38.712753 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-19 04:58:38.767750 | orchestrator | ~ 2026-02-19 04:58:38.767874 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-19 04:58:38.767899 | orchestrator | + deactivate 2026-02-19 04:58:38.767948 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-19 04:58:38.767969 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-19 04:58:38.767985 | orchestrator | + export PATH 2026-02-19 04:58:38.768001 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-19 04:58:38.768018 | orchestrator | + '[' -n '' ']' 2026-02-19 04:58:38.768036 | orchestrator | + hash -r 2026-02-19 04:58:38.768052 | orchestrator | + '[' -n '' ']' 2026-02-19 04:58:38.768070 | orchestrator | + unset VIRTUAL_ENV 2026-02-19 04:58:38.768087 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-19 04:58:38.768103 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-19 04:58:38.768120 | orchestrator | + unset -f deactivate 2026-02-19 04:58:38.768136 | orchestrator | + popd 2026-02-19 04:58:38.769397 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-02-19 04:58:38.826039 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-19 04:58:38.827208 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-19 04:58:38.934744 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-19 04:58:38.934845 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-19 04:58:38.941550 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-19 04:58:38.949864 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-02-19 04:58:39.017566 | orchestrator | ++ '[' -1 -le 0 ']' 2026-02-19 04:58:39.018587 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-02-19 04:58:39.128598 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-02-19 04:58:39.128712 | orchestrator | ++ echo true 2026-02-19 04:58:39.128789 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-02-19 04:58:39.130717 | orchestrator | +++ semver 2024.2 2024.2 2026-02-19 04:58:39.218340 | orchestrator | ++ '[' 0 -le 0 ']' 2026-02-19 04:58:39.218444 | orchestrator | +++ semver 2024.2 2025.1 2026-02-19 04:58:39.269139 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-02-19 04:58:39.269246 | orchestrator | ++ echo false 2026-02-19 04:58:39.269263 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-02-19 04:58:39.269276 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-19 04:58:39.269287 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-02-19 04:58:39.269298 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-02-19 04:58:39.269312 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-02-19 04:58:39.273124 | orchestrator | + 
echo 'export RABBITMQ3TO4=true' 2026-02-19 04:58:39.273204 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-02-19 04:58:39.285144 | orchestrator | export RABBITMQ3TO4=true 2026-02-19 04:58:39.289893 | orchestrator | + osism update manager 2026-02-19 04:58:44.958543 | orchestrator | Collecting uv 2026-02-19 04:58:45.055553 | orchestrator | Downloading uv-0.10.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-02-19 04:58:45.076652 | orchestrator | Downloading uv-0.10.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.1 MB) 2026-02-19 04:58:46.276620 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.1/23.1 MB 25.0 MB/s eta 0:00:00 2026-02-19 04:58:46.344431 | orchestrator | Installing collected packages: uv 2026-02-19 04:58:46.808524 | orchestrator | Successfully installed uv-0.10.4 2026-02-19 04:58:47.409230 | orchestrator | Resolved 11 packages in 295ms 2026-02-19 04:58:47.447301 | orchestrator | Downloading netaddr (2.2MiB) 2026-02-19 04:58:47.447393 | orchestrator | Downloading cryptography (4.3MiB) 2026-02-19 04:58:47.447472 | orchestrator | Downloading ansible-core (2.1MiB) 2026-02-19 04:58:47.447971 | orchestrator | Downloading ansible (54.5MiB) 2026-02-19 04:58:47.880371 | orchestrator | Downloaded netaddr 2026-02-19 04:58:47.918601 | orchestrator | Downloaded ansible-core 2026-02-19 04:58:48.051197 | orchestrator | Downloaded cryptography 2026-02-19 04:58:54.791172 | orchestrator | Downloaded ansible 2026-02-19 04:58:54.791340 | orchestrator | Prepared 11 packages in 7.38s 2026-02-19 04:58:55.354788 | orchestrator | Installed 11 packages in 562ms 2026-02-19 04:58:55.354903 | orchestrator | + ansible==11.11.0 2026-02-19 04:58:55.354918 | orchestrator | + ansible-core==2.18.13 2026-02-19 04:58:55.354929 | orchestrator | + cffi==2.0.0 2026-02-19 04:58:55.355655 | orchestrator | + cryptography==46.0.5 2026-02-19 04:58:55.355719 | orchestrator | + jinja2==3.1.6 2026-02-19 04:58:55.355752 | orchestrator | 
+ markupsafe==3.0.3 2026-02-19 04:58:55.355761 | orchestrator | + netaddr==1.3.0 2026-02-19 04:58:55.355769 | orchestrator | + packaging==26.0 2026-02-19 04:58:55.355776 | orchestrator | + pycparser==3.0 2026-02-19 04:58:55.355784 | orchestrator | + pyyaml==6.0.3 2026-02-19 04:58:55.355792 | orchestrator | + resolvelib==1.0.1 2026-02-19 04:58:56.440694 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-199227jo895rr1/tmph5jqu3o4/ansible-collection-servicesmnh4rytb'... 2026-02-19 04:58:57.992535 | orchestrator | Your branch is up to date with 'origin/main'. 2026-02-19 04:58:57.992662 | orchestrator | Already on 'main' 2026-02-19 04:58:58.519603 | orchestrator | Starting galaxy collection install process 2026-02-19 04:58:58.519813 | orchestrator | Process install dependency map 2026-02-19 04:58:58.519845 | orchestrator | Starting collection install process 2026-02-19 04:58:58.519866 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-02-19 04:58:58.519886 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-02-19 04:58:58.519904 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-19 04:58:59.032352 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-199253di8rshcf/tmplsnqjpor/ansible-playbooks-manager6qatwe3e'... 2026-02-19 04:58:59.669660 | orchestrator | Your branch is up to date with 'origin/main'. 
2026-02-19 04:58:59.669794 | orchestrator | Already on 'main' 2026-02-19 04:58:59.917634 | orchestrator | Starting galaxy collection install process 2026-02-19 04:58:59.917775 | orchestrator | Process install dependency map 2026-02-19 04:58:59.917795 | orchestrator | Starting collection install process 2026-02-19 04:58:59.917808 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-02-19 04:58:59.917821 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-02-19 04:58:59.917833 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-02-19 04:59:00.548710 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-02-19 04:59:00.548935 | orchestrator | -vvvv to see details 2026-02-19 04:59:00.961171 | orchestrator | 2026-02-19 04:59:00.961288 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-02-19 04:59:00.961309 | orchestrator | 2026-02-19 04:59:00.961319 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-19 04:59:04.968121 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:04.968210 | orchestrator | 2026-02-19 04:59:04.968222 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-19 04:59:05.041409 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-19 04:59:05.041495 | orchestrator | 2026-02-19 04:59:05.041525 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-19 04:59:06.758091 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:06.758238 | orchestrator | 2026-02-19 04:59:06.758258 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-02-19 04:59:06.828869 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:06.828987 | orchestrator | 2026-02-19 04:59:06.829012 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-19 04:59:06.913999 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-19 04:59:06.914164 | orchestrator | 2026-02-19 04:59:06.914181 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-19 04:59:11.270888 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-02-19 04:59:11.270981 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-02-19 04:59:11.270994 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-19 04:59:11.271017 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-02-19 04:59:11.271027 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-19 04:59:11.271037 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-19 04:59:11.271046 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-19 04:59:11.271056 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-02-19 04:59:11.271066 | orchestrator | 2026-02-19 04:59:11.271078 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-19 04:59:12.349200 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:12.349347 | orchestrator | 2026-02-19 04:59:12.349366 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-19 04:59:13.328177 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:13.328291 | orchestrator | 2026-02-19 04:59:13.328315 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-02-19 04:59:13.414554 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-19 04:59:13.414646 | orchestrator | 2026-02-19 04:59:13.414659 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-19 04:59:15.233258 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-02-19 04:59:15.233330 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-02-19 04:59:15.233336 | orchestrator | 2026-02-19 04:59:15.233352 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-19 04:59:16.200580 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:16.200673 | orchestrator | 2026-02-19 04:59:16.200694 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-19 04:59:16.256881 | orchestrator | skipping: [testbed-manager] 2026-02-19 04:59:16.257037 | orchestrator | 2026-02-19 04:59:16.257059 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-19 04:59:16.357804 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-19 04:59:16.357899 | orchestrator | 2026-02-19 04:59:16.357913 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-19 04:59:17.307521 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:17.307630 | orchestrator | 2026-02-19 04:59:17.307646 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-19 04:59:17.376106 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-19 04:59:17.376209 | 
orchestrator | 2026-02-19 04:59:17.376224 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-19 04:59:19.284570 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-19 04:59:19.284681 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-19 04:59:19.284696 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:19.284710 | orchestrator | 2026-02-19 04:59:19.284722 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-19 04:59:20.217445 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:20.217541 | orchestrator | 2026-02-19 04:59:20.217555 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-19 04:59:20.287174 | orchestrator | skipping: [testbed-manager] 2026-02-19 04:59:20.287278 | orchestrator | 2026-02-19 04:59:20.287294 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-19 04:59:20.400984 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-19 04:59:20.401060 | orchestrator | 2026-02-19 04:59:20.401070 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-19 04:59:21.070360 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:21.070496 | orchestrator | 2026-02-19 04:59:21.070516 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-19 04:59:21.601364 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:21.601466 | orchestrator | 2026-02-19 04:59:21.601484 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-19 04:59:23.475046 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-02-19 04:59:23.475194 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-02-19 04:59:23.475223 | orchestrator | 2026-02-19 04:59:23.475246 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-19 04:59:24.627272 | orchestrator | changed: [testbed-manager] 2026-02-19 04:59:24.627401 | orchestrator | 2026-02-19 04:59:24.627425 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-19 04:59:25.216473 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:25.216563 | orchestrator | 2026-02-19 04:59:25.216574 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-19 04:59:25.782155 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:25.782271 | orchestrator | 2026-02-19 04:59:25.782323 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-19 04:59:25.848692 | orchestrator | skipping: [testbed-manager] 2026-02-19 04:59:25.848858 | orchestrator | 2026-02-19 04:59:25.848880 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-19 04:59:25.918655 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-19 04:59:25.918726 | orchestrator | 2026-02-19 04:59:25.918733 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-19 04:59:25.971710 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:25.971829 | orchestrator | 2026-02-19 04:59:25.971844 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-19 04:59:28.854938 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-02-19 04:59:28.855074 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-02-19 04:59:28.855092 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
2026-02-19 04:59:28.855105 | orchestrator | 2026-02-19 04:59:28.855118 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-19 04:59:29.852426 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:29.852522 | orchestrator | 2026-02-19 04:59:29.852537 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-19 04:59:30.842717 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:30.842888 | orchestrator | 2026-02-19 04:59:30.842906 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-19 04:59:31.827069 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:31.827177 | orchestrator | 2026-02-19 04:59:31.827196 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-19 04:59:31.916308 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-19 04:59:31.916440 | orchestrator | 2026-02-19 04:59:31.916469 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-19 04:59:31.976897 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:31.977021 | orchestrator | 2026-02-19 04:59:31.977056 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-19 04:59:32.990237 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-02-19 04:59:32.990334 | orchestrator | 2026-02-19 04:59:32.990351 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-19 04:59:33.091391 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-19 04:59:33.091467 | orchestrator | 2026-02-19 04:59:33.091477 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-19 04:59:34.053995 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:34.054177 | orchestrator | 2026-02-19 04:59:34.054196 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-19 04:59:35.079495 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:35.079584 | orchestrator | 2026-02-19 04:59:35.079595 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-19 04:59:35.164458 | orchestrator | skipping: [testbed-manager] 2026-02-19 04:59:35.164590 | orchestrator | 2026-02-19 04:59:35.164615 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-19 04:59:35.235458 | orchestrator | ok: [testbed-manager] 2026-02-19 04:59:35.235540 | orchestrator | 2026-02-19 04:59:35.235552 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-19 04:59:36.613329 | orchestrator | changed: [testbed-manager] 2026-02-19 04:59:36.613430 | orchestrator | 2026-02-19 04:59:36.613447 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-19 05:00:46.266961 | orchestrator | changed: [testbed-manager] 2026-02-19 05:00:46.267102 | orchestrator | 2026-02-19 05:00:46.267121 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-19 05:00:47.439285 | orchestrator | ok: [testbed-manager] 2026-02-19 05:00:47.439384 | orchestrator | 2026-02-19 05:00:47.439401 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-19 05:00:47.497991 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:00:47.498166 | orchestrator | 2026-02-19 05:00:47.498194 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-19 
05:00:48.384468 | orchestrator | ok: [testbed-manager] 2026-02-19 05:00:48.384554 | orchestrator | 2026-02-19 05:00:48.384564 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-19 05:00:48.472082 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:00:48.472164 | orchestrator | 2026-02-19 05:00:48.472175 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-19 05:00:48.472183 | orchestrator | 2026-02-19 05:00:48.472190 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-19 05:01:03.457853 | orchestrator | changed: [testbed-manager] 2026-02-19 05:01:03.457986 | orchestrator | 2026-02-19 05:01:03.458002 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-19 05:02:03.524108 | orchestrator | Pausing for 60 seconds 2026-02-19 05:02:03.524259 | orchestrator | changed: [testbed-manager] 2026-02-19 05:02:03.524286 | orchestrator | 2026-02-19 05:02:03.524308 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-02-19 05:02:03.574697 | orchestrator | ok: [testbed-manager] 2026-02-19 05:02:03.574839 | orchestrator | 2026-02-19 05:02:03.574854 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-19 05:02:07.069505 | orchestrator | changed: [testbed-manager] 2026-02-19 05:02:07.069593 | orchestrator | 2026-02-19 05:02:07.069607 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-19 05:03:09.884473 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-19 05:03:09.884590 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-02-19 05:03:09.884607 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-02-19 05:03:09.884621 | orchestrator | changed: [testbed-manager] 2026-02-19 05:03:09.884635 | orchestrator | 2026-02-19 05:03:09.884647 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-19 05:03:21.270532 | orchestrator | changed: [testbed-manager] 2026-02-19 05:03:21.270643 | orchestrator | 2026-02-19 05:03:21.270662 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-19 05:03:21.368793 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-19 05:03:21.368956 | orchestrator | 2026-02-19 05:03:21.368972 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-19 05:03:21.368981 | orchestrator | 2026-02-19 05:03:21.368990 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-19 05:03:21.442533 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:03:21.442632 | orchestrator | 2026-02-19 05:03:21.442648 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-19 05:03:21.525012 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-19 05:03:21.525095 | orchestrator | 2026-02-19 05:03:21.525126 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-19 05:03:22.671702 | orchestrator | changed: [testbed-manager] 2026-02-19 05:03:22.671791 | orchestrator | 2026-02-19 05:03:22.671801 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-19 05:03:26.263484 
| orchestrator | ok: [testbed-manager] 2026-02-19 05:03:26.263575 | orchestrator | 2026-02-19 05:03:26.263589 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-19 05:03:26.350311 | orchestrator | ok: [testbed-manager] => { 2026-02-19 05:03:26.350406 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-19 05:03:26.350420 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-19 05:03:26.350430 | orchestrator | "Checking running containers against expected versions...", 2026-02-19 05:03:26.350441 | orchestrator | "", 2026-02-19 05:03:26.350452 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-19 05:03:26.350462 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-19 05:03:26.350473 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.350483 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-19 05:03:26.350493 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.350503 | orchestrator | "", 2026-02-19 05:03:26.350513 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-19 05:03:26.350523 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-19 05:03:26.350533 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.350543 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-19 05:03:26.350553 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.350562 | orchestrator | "", 2026-02-19 05:03:26.350572 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-19 05:03:26.350582 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-19 05:03:26.350591 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.350601 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-19 05:03:26.350611 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.350620 | orchestrator | "", 2026-02-19 05:03:26.350630 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-19 05:03:26.350640 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-19 05:03:26.350649 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.350659 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-19 05:03:26.350669 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.350678 | orchestrator | "", 2026-02-19 05:03:26.350688 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-19 05:03:26.350698 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-19 05:03:26.350707 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.350723 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-19 05:03:26.350738 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.350752 | orchestrator | "", 2026-02-19 05:03:26.350765 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-19 05:03:26.350806 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-19 05:03:26.350852 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.350869 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-19 05:03:26.350881 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.350892 | orchestrator | "", 2026-02-19 05:03:26.350903 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-19 05:03:26.350915 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-19 05:03:26.350926 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.350936 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-19 
05:03:26.350947 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.350958 | orchestrator | "", 2026-02-19 05:03:26.350969 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-19 05:03:26.350980 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-19 05:03:26.350991 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.351013 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-19 05:03:26.351024 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.351035 | orchestrator | "", 2026-02-19 05:03:26.351046 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-19 05:03:26.351058 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-19 05:03:26.351069 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.351081 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-19 05:03:26.351092 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.351104 | orchestrator | "", 2026-02-19 05:03:26.351118 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-19 05:03:26.351130 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-19 05:03:26.351141 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.351152 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-19 05:03:26.351163 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.351175 | orchestrator | "", 2026-02-19 05:03:26.351186 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-19 05:03:26.351196 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-19 05:03:26.351205 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.351215 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-19 05:03:26.351224 | orchestrator | " Status: ✅ MATCH", 2026-02-19 
05:03:26.351234 | orchestrator | "", 2026-02-19 05:03:26.351243 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-19 05:03:26.351253 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-19 05:03:26.351262 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.351272 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-19 05:03:26.351281 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.351291 | orchestrator | "", 2026-02-19 05:03:26.351300 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-19 05:03:26.351310 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-19 05:03:26.351319 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.351329 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-19 05:03:26.351338 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.351348 | orchestrator | "", 2026-02-19 05:03:26.351357 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-19 05:03:26.351367 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-19 05:03:26.351376 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.351386 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-19 05:03:26.351413 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.351423 | orchestrator | "", 2026-02-19 05:03:26.351433 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-19 05:03:26.351442 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-19 05:03:26.351460 | orchestrator | " Enabled: true", 2026-02-19 05:03:26.351469 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-19 05:03:26.351479 | orchestrator | " Status: ✅ MATCH", 2026-02-19 05:03:26.351489 | orchestrator | "", 2026-02-19 05:03:26.351498 | orchestrator | "=== Summary 
===", 2026-02-19 05:03:26.351508 | orchestrator | "Errors (version mismatches): 0", 2026-02-19 05:03:26.351518 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-19 05:03:26.351527 | orchestrator | "", 2026-02-19 05:03:26.351537 | orchestrator | "✅ All running containers match expected versions!" 2026-02-19 05:03:26.351553 | orchestrator | ] 2026-02-19 05:03:26.351568 | orchestrator | } 2026-02-19 05:03:26.351585 | orchestrator | 2026-02-19 05:03:26.351601 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-19 05:03:26.414374 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:03:26.414494 | orchestrator | 2026-02-19 05:03:26.414513 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:03:26.414527 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-02-19 05:03:26.414539 | orchestrator | 2026-02-19 05:03:38.990661 | orchestrator | 2026-02-19 05:03:38 | INFO  | Task ff2aac72-1450-4e00-8d88-9a63685ec9df (sync inventory) is running in background. Output coming soon. 
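The version check above walks each manager service, comparing the expected image reference from the role's configuration against the image the running container was started from. A minimal sketch of that comparison logic (hypothetical helper names; the real check script is deployed by `osism.services.manager` and also reports enabled/running state):

```shell
# compare_image_refs EXPECTED RUNNING -> prints MATCH or MISMATCH.
# Pure string comparison, so it can be exercised without Docker.
compare_image_refs() {
    local expected="$1" running="$2"
    if [ "$running" = "$expected" ]; then
        echo "MATCH"
    else
        echo "MISMATCH"
        return 1
    fi
}

# check_service NAME EXPECTED -> looks up the running image via docker
# inspect and delegates to compare_image_refs. Hypothetical wrapper; the
# deployed script additionally handles disabled services and warnings.
check_service() {
    local name="$1" expected="$2" running
    running=$(docker inspect -f '{{.Config.Image}}' "$name" 2>/dev/null) || {
        echo "WARNING: ${name} not running"
        return 2
    }
    compare_image_refs "$expected" "$running"
}
```

A mismatch (non-zero return) is what would be counted under "Errors (version mismatches)" in the summary above.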
2026-02-19 05:04:08.037626 | orchestrator | 2026-02-19 05:03:40 | INFO  | Starting group_vars file reorganization 2026-02-19 05:04:08.037762 | orchestrator | 2026-02-19 05:03:40 | INFO  | Moved 0 file(s) to their respective directories 2026-02-19 05:04:08.037780 | orchestrator | 2026-02-19 05:03:40 | INFO  | Group_vars file reorganization completed 2026-02-19 05:04:08.037814 | orchestrator | 2026-02-19 05:03:43 | INFO  | Starting variable preparation from inventory 2026-02-19 05:04:08.037877 | orchestrator | 2026-02-19 05:03:46 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-19 05:04:08.037890 | orchestrator | 2026-02-19 05:03:46 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-19 05:04:08.037902 | orchestrator | 2026-02-19 05:03:46 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-19 05:04:08.037913 | orchestrator | 2026-02-19 05:03:46 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-19 05:04:08.037924 | orchestrator | 2026-02-19 05:03:46 | INFO  | Variable preparation completed 2026-02-19 05:04:08.037936 | orchestrator | 2026-02-19 05:03:48 | INFO  | Starting inventory overwrite handling 2026-02-19 05:04:08.037947 | orchestrator | 2026-02-19 05:03:48 | INFO  | Handling group overwrites in 99-overwrite 2026-02-19 05:04:08.037958 | orchestrator | 2026-02-19 05:03:48 | INFO  | Removing group frr:children from 60-generic 2026-02-19 05:04:08.037969 | orchestrator | 2026-02-19 05:03:48 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-19 05:04:08.037980 | orchestrator | 2026-02-19 05:03:48 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-19 05:04:08.037993 | orchestrator | 2026-02-19 05:03:48 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-19 05:04:08.038012 | orchestrator | 2026-02-19 05:03:48 | INFO  | Handling group overwrites in 20-roles 2026-02-19 05:04:08.038098 | orchestrator | 2026-02-19 05:03:48 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-19 05:04:08.038110 | orchestrator | 2026-02-19 05:03:48 | INFO  | Removed 5 group(s) in total 2026-02-19 05:04:08.038121 | orchestrator | 2026-02-19 05:03:48 | INFO  | Inventory overwrite handling completed 2026-02-19 05:04:08.038133 | orchestrator | 2026-02-19 05:03:49 | INFO  | Starting merge of inventory files 2026-02-19 05:04:08.038147 | orchestrator | 2026-02-19 05:03:49 | INFO  | Inventory files merged successfully 2026-02-19 05:04:08.038186 | orchestrator | 2026-02-19 05:03:54 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-19 05:04:08.038201 | orchestrator | 2026-02-19 05:04:06 | INFO  | Successfully wrote ClusterShell configuration 2026-02-19 05:04:08.382346 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-19 05:04:08.382445 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-19 05:04:08.382461 | orchestrator | + local max_attempts=60 2026-02-19 05:04:08.382475 | orchestrator | + local name=kolla-ansible 2026-02-19 05:04:08.382486 | orchestrator | + local attempt_num=1 2026-02-19 05:04:08.383366 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-19 05:04:08.423464 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-19 05:04:08.423558 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-19 05:04:08.423574 | orchestrator | + local max_attempts=60 2026-02-19 05:04:08.423587 | orchestrator | + local name=osism-ansible 2026-02-19 05:04:08.423604 | orchestrator | + local attempt_num=1 2026-02-19 05:04:08.423804 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-19 05:04:08.453374 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-19 05:04:08.453480 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-19 05:04:08.651195 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-19 05:04:08.651305 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-19 05:04:08.651321 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-19 05:04:08.651332 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-19 05:04:08.651346 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-02-19 05:04:08.651355 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-02-19 05:04:08.651364 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-02-19 05:04:08.651373 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-02-19 05:04:08.651382 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 19 seconds ago 2026-02-19 05:04:08.651390 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-02-19 05:04:08.651399 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-02-19 05:04:08.651408 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-02-19 05:04:08.651417 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-19 05:04:08.651448 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-02-19 05:04:08.651458 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-02-19 05:04:08.651466 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-02-19 05:04:08.656674 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-02-19 05:04:08.656735 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-02-19 05:04:08.656746 | orchestrator | + osism apply facts 2026-02-19 05:04:20.792525 | orchestrator | 2026-02-19 05:04:20 | INFO  | Task 600c8b38-7226-4152-a98c-378ea85bb8c1 (facts) was prepared for execution. 2026-02-19 05:04:20.792636 | orchestrator | 2026-02-19 05:04:20 | INFO  | It takes a moment until task 600c8b38-7226-4152-a98c-378ea85bb8c1 (facts) has been started and output is visible here. 
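The `set -x` trace above shows `wait_for_container_healthy 60 kolla-ansible` polling `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy`. A reconstruction of that helper (argument order and inspect format string taken from the trace; the retry interval and the `container_health` wrapper are assumptions, the latter added so the Docker lookup can be stubbed out):

```shell
# Health lookup wrapped in a function so it can be stubbed in tests.
container_health() {
    docker inspect -f '{{.State.Health.Status}}' "$1" 2>/dev/null
}

# wait_for_container_healthy MAX_ATTEMPTS NAME [INTERVAL]
# Polls until the container's healthcheck reports "healthy"; fails after
# MAX_ATTEMPTS polls. INTERVAL (seconds between polls) is an assumption.
wait_for_container_healthy() {
    local max_attempts="$1" name="$2" interval="${3:-5}"
    local attempt_num=1
    until [ "$(container_health "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container ${name} did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep "$interval"
    done
}
```

In the trace both containers were already healthy, so the `until` condition was satisfied on the first poll and the function returned immediately.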
2026-02-19 05:04:39.620711 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-19 05:04:39.620902 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-19 05:04:39.620954 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-19 05:04:39.620974 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-19 05:04:39.621014 | orchestrator | 2026-02-19 05:04:39.621034 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-19 05:04:39.621053 | orchestrator | 2026-02-19 05:04:39.621072 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-19 05:04:39.621088 | orchestrator | Thursday 19 February 2026 05:04:26 +0000 (0:00:01.768) 0:00:01.768 ***** 2026-02-19 05:04:39.621100 | orchestrator | ok: [testbed-manager] 2026-02-19 05:04:39.621112 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:04:39.621123 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:04:39.621134 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:04:39.621144 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:04:39.621155 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:04:39.621166 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:04:39.621177 | orchestrator | 2026-02-19 05:04:39.621188 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-19 05:04:39.621199 | orchestrator | Thursday 19 February 2026 05:04:29 +0000 (0:00:02.289) 0:00:04.057 ***** 2026-02-19 05:04:39.621210 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:04:39.621223 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:04:39.621257 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:04:39.621270 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:04:39.621295 | orchestrator | skipping: [testbed-node-3] 2026-02-19 
05:04:39.621316 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:04:39.621333 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:04:39.621346 | orchestrator | 2026-02-19 05:04:39.621359 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-19 05:04:39.621372 | orchestrator | 2026-02-19 05:04:39.621385 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-19 05:04:39.621397 | orchestrator | Thursday 19 February 2026 05:04:31 +0000 (0:00:01.837) 0:00:05.895 ***** 2026-02-19 05:04:39.621410 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:04:39.621422 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:04:39.621435 | orchestrator | ok: [testbed-manager] 2026-02-19 05:04:39.621447 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:04:39.621484 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:04:39.621497 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:04:39.621509 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:04:39.621521 | orchestrator | 2026-02-19 05:04:39.621535 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-19 05:04:39.621547 | orchestrator | 2026-02-19 05:04:39.621559 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-19 05:04:39.621572 | orchestrator | Thursday 19 February 2026 05:04:37 +0000 (0:00:06.372) 0:00:12.267 ***** 2026-02-19 05:04:39.621584 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:04:39.621597 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:04:39.621610 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:04:39.621621 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:04:39.621631 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:04:39.621642 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:04:39.621653 | orchestrator | skipping: [testbed-node-5] 
2026-02-19 05:04:39.621663 | orchestrator | 2026-02-19 05:04:39.621674 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:04:39.621685 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 05:04:39.621698 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 05:04:39.621709 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 05:04:39.621720 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 05:04:39.621730 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 05:04:39.621741 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 05:04:39.621752 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 05:04:39.621764 | orchestrator | 2026-02-19 05:04:39.621783 | orchestrator | 2026-02-19 05:04:39.621802 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 05:04:39.621821 | orchestrator | Thursday 19 February 2026 05:04:39 +0000 (0:00:01.664) 0:00:13.932 ***** 2026-02-19 05:04:39.621868 | orchestrator | =============================================================================== 2026-02-19 05:04:39.621886 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.37s 2026-02-19 05:04:39.621904 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.29s 2026-02-19 05:04:39.621995 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.84s 2026-02-19 05:04:39.622085 | orchestrator | Gather facts for all hosts 
---------------------------------------------- 1.66s 2026-02-19 05:04:39.941549 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-19 05:04:40.038001 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-19 05:04:40.038786 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-19 05:04:40.085816 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-02-19 05:04:40.085944 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-02-19 05:04:40.093180 | orchestrator | + set -e 2026-02-19 05:04:40.093394 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-02-19 05:04:40.093415 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-19 05:04:40.104009 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-02-19 05:04:40.114728 | orchestrator | 2026-02-19 05:04:40.114801 | orchestrator | # UPGRADE SERVICES 2026-02-19 05:04:40.114884 | orchestrator | 2026-02-19 05:04:40.114897 | orchestrator | + set -e 2026-02-19 05:04:40.114909 | orchestrator | + echo 2026-02-19 05:04:40.114920 | orchestrator | + echo '# UPGRADE SERVICES' 2026-02-19 05:04:40.114931 | orchestrator | + echo 2026-02-19 05:04:40.114942 | orchestrator | + source /opt/manager-vars.sh 2026-02-19 05:04:40.115971 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-19 05:04:40.116020 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-19 05:04:40.116032 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-19 05:04:40.116043 | orchestrator | ++ CEPH_VERSION=reef 2026-02-19 05:04:40.116054 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-19 05:04:40.116066 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-19 05:04:40.116078 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-19 05:04:40.116089 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-19 05:04:40.116100 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2
2026-02-19 05:04:40.116111 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-19 05:04:40.116122 | orchestrator | ++ export ARA=false
2026-02-19 05:04:40.116133 | orchestrator | ++ ARA=false
2026-02-19 05:04:40.116148 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-19 05:04:40.116167 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-19 05:04:40.116193 | orchestrator | ++ export TEMPEST=false
2026-02-19 05:04:40.116213 | orchestrator | ++ TEMPEST=false
2026-02-19 05:04:40.116231 | orchestrator | ++ export IS_ZUUL=true
2026-02-19 05:04:40.116248 | orchestrator | ++ IS_ZUUL=true
2026-02-19 05:04:40.116265 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14
2026-02-19 05:04:40.116281 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14
2026-02-19 05:04:40.116299 | orchestrator | ++ export EXTERNAL_API=false
2026-02-19 05:04:40.116316 | orchestrator | ++ EXTERNAL_API=false
2026-02-19 05:04:40.116334 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-19 05:04:40.116351 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-19 05:04:40.116370 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-19 05:04:40.116389 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-19 05:04:40.116408 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-19 05:04:40.116424 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-19 05:04:40.116435 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-19 05:04:40.116446 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-19 05:04:40.116563 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-02-19 05:04:40.116581 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-02-19 05:04:40.116592 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-19 05:04:40.126459 | orchestrator | + set -e
2026-02-19 05:04:40.126531 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-19 05:04:40.127651 | orchestrator | ++ export INTERACTIVE=false
2026-02-19 05:04:40.127686 | orchestrator | ++ INTERACTIVE=false
2026-02-19 05:04:40.127711 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-19 05:04:40.127722 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-19 05:04:40.127733 | orchestrator | + source /opt/manager-vars.sh
2026-02-19 05:04:40.127744 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-19 05:04:40.127755 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-19 05:04:40.127766 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-19 05:04:40.127777 | orchestrator | ++ CEPH_VERSION=reef
2026-02-19 05:04:40.127788 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-19 05:04:40.127800 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-19 05:04:40.127811 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-19 05:04:40.127823 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-19 05:04:40.127861 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-19 05:04:40.127872 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-19 05:04:40.127883 | orchestrator | ++ export ARA=false
2026-02-19 05:04:40.127894 | orchestrator | ++ ARA=false
2026-02-19 05:04:40.127905 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-19 05:04:40.127916 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-19 05:04:40.127927 | orchestrator | ++ export TEMPEST=false
2026-02-19 05:04:40.127938 | orchestrator | ++ TEMPEST=false
2026-02-19 05:04:40.127949 | orchestrator | ++ export IS_ZUUL=true
2026-02-19 05:04:40.127960 | orchestrator | ++ IS_ZUUL=true
2026-02-19 05:04:40.127972 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14
2026-02-19 05:04:40.127983 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14
2026-02-19 05:04:40.127994 | orchestrator | ++ export EXTERNAL_API=false
2026-02-19 05:04:40.128005 | orchestrator | ++ EXTERNAL_API=false
2026-02-19 05:04:40.128016 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-19 05:04:40.128026 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-19 05:04:40.128037 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-19 05:04:40.128048 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-19 05:04:40.128059 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-19 05:04:40.128070 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-19 05:04:40.128106 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-19 05:04:40.128117 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-19 05:04:40.128129 | orchestrator | + echo
2026-02-19 05:04:40.128140 | orchestrator |
2026-02-19 05:04:40.128152 | orchestrator | # PULL IMAGES
2026-02-19 05:04:40.128163 | orchestrator | + echo '# PULL IMAGES'
2026-02-19 05:04:40.128174 | orchestrator | + echo
2026-02-19 05:04:40.128185 | orchestrator |
2026-02-19 05:04:40.129324 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-19 05:04:40.191308 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-19 05:04:40.191398 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-19 05:04:42.362975 | orchestrator | 2026-02-19 05:04:42 | INFO  | Trying to run play pull-images in environment custom
2026-02-19 05:04:52.533378 | orchestrator | 2026-02-19 05:04:52 | INFO  | Task 5dfbeee2-b38b-42a2-b3e4-64f4cbafa8d5 (pull-images) was prepared for execution.
2026-02-19 05:04:52.533468 | orchestrator | 2026-02-19 05:04:52 | INFO  | Task 5dfbeee2-b38b-42a2-b3e4-64f4cbafa8d5 is running in background. No more output. Check ARA for logs.
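The `semver 9.5.0 7.0.0` / `[[ 1 -ge 0 ]]` pattern above gates each upgrade step on the installed manager version. The real `semver` helper ships with the testbed configuration and is not shown in this log, so the following is only a sketch of the comparison it appears to perform (print `1`, `0`, or `-1`), built on `sort -V`; its pre-release handling is a simplification, not exact semver precedence.

```shell
# Hypothetical stand-in for the testbed's semver helper; only the observed
# contract (prints 1, 0, or -1) is reproduced here, using sort -V.
semver() {
    if [ "$1" = "$2" ]; then
        echo "0"
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        echo "1"    # first argument sorts after the second
    else
        echo "-1"   # first argument sorts before the second
    fi
}

# Mirrors the gate in pull-images.sh: 9.5.0 >= 7.0.0, so the step runs.
if [ "$(semver 9.5.0 7.0.0)" -ge 0 ]; then
    echo "running pull-images step"
fi
```

The exit status of `[ ... -ge 0 ]` is what matters to the scripts: a step is skipped only when the installed version sorts strictly below the required one.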
2026-02-19 05:04:52.885997 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-02-19 05:04:52.893896 | orchestrator | + set -e
2026-02-19 05:04:52.893967 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-19 05:04:52.893976 | orchestrator | ++ export INTERACTIVE=false
2026-02-19 05:04:52.893984 | orchestrator | ++ INTERACTIVE=false
2026-02-19 05:04:52.893991 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-19 05:04:52.893997 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-19 05:04:52.894004 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-19 05:04:52.895287 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-19 05:04:52.902176 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-19 05:04:52.902259 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-19 05:04:52.902455 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-02-19 05:04:52.939051 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-19 05:04:52.939140 | orchestrator | + osism apply frr
2026-02-19 05:05:05.149724 | orchestrator | 2026-02-19 05:05:05 | INFO  | Task 9692e241-8a82-47be-aa14-80f1054088c7 (frr) was prepared for execution.
2026-02-19 05:05:05.149815 | orchestrator | 2026-02-19 05:05:05 | INFO  | It takes a moment until task 9692e241-8a82-47be-aa14-80f1054088c7 (frr) has been started and output is visible here.
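In the trace above, `manager-version.sh` derives `MANAGER_VERSION` from the configuration repository with a single awk call. A minimal reproduction follows; the file content is an illustrative stand-in for `/opt/configuration/environments/manager/configuration.yml`, written to `/tmp` so the sketch is self-contained.

```shell
# Illustrative stand-in for the manager configuration file; only the
# manager_version line matters for the extraction shown in the log.
cat > /tmp/configuration.yml <<'EOF'
---
manager_version: 10.0.0-rc.1
openstack_version: 2024.2
EOF

# Same awk invocation as in the trace: split fields on ": " and print
# the value of the line that starts with "manager_version:".
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' /tmp/configuration.yml)
export MANAGER_VERSION
echo "$MANAGER_VERSION"   # prints 10.0.0-rc.1
```

Anchoring the pattern with `^` keeps the match away from commented-out or nested keys elsewhere in the YAML.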
2026-02-19 05:05:37.392741 | orchestrator |
2026-02-19 05:05:37.392937 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-19 05:05:37.392961 | orchestrator |
2026-02-19 05:05:37.392973 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-19 05:05:37.392985 | orchestrator | Thursday 19 February 2026 05:05:13 +0000 (0:00:03.589) 0:00:03.589 *****
2026-02-19 05:05:37.392997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-19 05:05:37.393009 | orchestrator |
2026-02-19 05:05:37.393020 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-19 05:05:37.393032 | orchestrator | Thursday 19 February 2026 05:05:15 +0000 (0:00:02.044) 0:00:05.634 *****
2026-02-19 05:05:37.393046 | orchestrator | ok: [testbed-manager]
2026-02-19 05:05:37.393070 | orchestrator |
2026-02-19 05:05:37.393098 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-19 05:05:37.393116 | orchestrator | Thursday 19 February 2026 05:05:17 +0000 (0:00:02.121) 0:00:07.755 *****
2026-02-19 05:05:37.393134 | orchestrator | ok: [testbed-manager]
2026-02-19 05:05:37.393152 | orchestrator |
2026-02-19 05:05:37.393170 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-19 05:05:37.393188 | orchestrator | Thursday 19 February 2026 05:05:20 +0000 (0:00:02.720) 0:00:10.476 *****
2026-02-19 05:05:37.393205 | orchestrator | ok: [testbed-manager]
2026-02-19 05:05:37.393223 | orchestrator |
2026-02-19 05:05:37.393242 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-19 05:05:37.393262 | orchestrator | Thursday 19 February 2026 05:05:22 +0000 (0:00:01.916) 0:00:12.392 *****
2026-02-19 05:05:37.393283 | orchestrator | ok: [testbed-manager]
2026-02-19 05:05:37.393337 | orchestrator |
2026-02-19 05:05:37.393358 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-19 05:05:37.393376 | orchestrator | Thursday 19 February 2026 05:05:24 +0000 (0:00:01.934) 0:00:14.327 *****
2026-02-19 05:05:37.393396 | orchestrator | ok: [testbed-manager]
2026-02-19 05:05:37.393417 | orchestrator |
2026-02-19 05:05:37.393435 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-19 05:05:37.393455 | orchestrator | Thursday 19 February 2026 05:05:26 +0000 (0:00:02.406) 0:00:16.734 *****
2026-02-19 05:05:37.393476 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:05:37.393496 | orchestrator |
2026-02-19 05:05:37.393514 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-19 05:05:37.393527 | orchestrator | Thursday 19 February 2026 05:05:27 +0000 (0:00:01.155) 0:00:17.889 *****
2026-02-19 05:05:37.393540 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:05:37.393553 | orchestrator |
2026-02-19 05:05:37.393566 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-19 05:05:37.393577 | orchestrator | Thursday 19 February 2026 05:05:28 +0000 (0:00:01.141) 0:00:19.031 *****
2026-02-19 05:05:37.393590 | orchestrator | ok: [testbed-manager]
2026-02-19 05:05:37.393602 | orchestrator |
2026-02-19 05:05:37.393615 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-19 05:05:37.393628 | orchestrator | Thursday 19 February 2026 05:05:30 +0000 (0:00:01.968) 0:00:20.999 *****
2026-02-19 05:05:37.393638 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-19 05:05:37.393669 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-19 05:05:37.393696 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-19 05:05:37.393708 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-19 05:05:37.393719 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-19 05:05:37.393730 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-19 05:05:37.393741 | orchestrator |
2026-02-19 05:05:37.393752 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-19 05:05:37.393763 | orchestrator | Thursday 19 February 2026 05:05:34 +0000 (0:00:03.513) 0:00:24.513 *****
2026-02-19 05:05:37.393773 | orchestrator | ok: [testbed-manager]
2026-02-19 05:05:37.393784 | orchestrator |
2026-02-19 05:05:37.393795 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 05:05:37.393806 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-19 05:05:37.393817 | orchestrator |
2026-02-19 05:05:37.393828 | orchestrator |
2026-02-19 05:05:37.393839 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 05:05:37.393876 | orchestrator | Thursday 19 February 2026 05:05:37 +0000 (0:00:02.644) 0:00:27.158 *****
2026-02-19 05:05:37.393888 | orchestrator | ===============================================================================
2026-02-19 05:05:37.393899 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.51s
2026-02-19 05:05:37.393910 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.72s
2026-02-19 05:05:37.393920 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.65s
2026-02-19 05:05:37.393931 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.41s
2026-02-19 05:05:37.393941 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.12s
2026-02-19 05:05:37.393952 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 2.04s
2026-02-19 05:05:37.393962 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.97s
2026-02-19 05:05:37.393984 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.93s
2026-02-19 05:05:37.394083 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.92s
2026-02-19 05:05:37.394100 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.16s
2026-02-19 05:05:37.394111 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.14s
2026-02-19 05:05:37.719451 | orchestrator | + osism apply kubernetes
2026-02-19 05:05:39.838381 | orchestrator | 2026-02-19 05:05:39 | INFO  | Task f9a4a2f6-82ab-490c-b189-a8c13a530f85 (kubernetes) was prepared for execution.
2026-02-19 05:05:39.838495 | orchestrator | 2026-02-19 05:05:39 | INFO  | It takes a moment until task f9a4a2f6-82ab-490c-b189-a8c13a530f85 (kubernetes) has been started and output is visible here.
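The "Set sysctl parameters" task earlier in the frr play applies six kernel settings on testbed-manager. As a sketch, the same values (taken verbatim from the task output) can be rendered as a sysctl.d-style fragment; it is written to `/tmp` here so the example runs unprivileged, while the role itself applies them via Ansible's sysctl module rather than such a file.

```shell
# Render the frr role's sysctl settings as a sysctl.d-style fragment.
# Keys/values are copied from the "Set sysctl parameters" task output;
# the /tmp path is only for this unprivileged sketch.
cat > /tmp/90-frr-testbed.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
EOF
# Loading it for real would be: sudo sysctl -p /tmp/90-frr-testbed.conf
```

The forwarding and multipath-hash settings are what let the manager act as a BGP/ECMP router for the k3s_cilium setup that the next play deploys.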
2026-02-19 05:06:25.228010 | orchestrator | 2026-02-19 05:06:25.228133 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-19 05:06:25.228150 | orchestrator | 2026-02-19 05:06:25.228161 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-19 05:06:25.228171 | orchestrator | Thursday 19 February 2026 05:05:46 +0000 (0:00:01.987) 0:00:01.987 ***** 2026-02-19 05:06:25.228181 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:06:25.228191 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:06:25.228200 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:06:25.228209 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:06:25.228217 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:06:25.228226 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:06:25.228235 | orchestrator | 2026-02-19 05:06:25.228244 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-19 05:06:25.228253 | orchestrator | Thursday 19 February 2026 05:05:50 +0000 (0:00:04.459) 0:00:06.446 ***** 2026-02-19 05:06:25.228261 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:06:25.228271 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:06:25.228280 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:06:25.228288 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:06:25.228297 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:06:25.228306 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:06:25.228314 | orchestrator | 2026-02-19 05:06:25.228323 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-19 05:06:25.228333 | orchestrator | Thursday 19 February 2026 05:05:52 +0000 (0:00:01.889) 0:00:08.336 ***** 2026-02-19 05:06:25.228342 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:06:25.228351 | orchestrator | skipping: [testbed-node-4] 2026-02-19 
05:06:25.228360 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:06:25.228369 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:06:25.228377 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:06:25.228386 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:06:25.228395 | orchestrator | 2026-02-19 05:06:25.228404 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-19 05:06:25.228412 | orchestrator | Thursday 19 February 2026 05:05:54 +0000 (0:00:02.060) 0:00:10.396 ***** 2026-02-19 05:06:25.228421 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:06:25.228430 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:06:25.228439 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:06:25.228447 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:06:25.228456 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:06:25.228465 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:06:25.228473 | orchestrator | 2026-02-19 05:06:25.228482 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-19 05:06:25.228491 | orchestrator | Thursday 19 February 2026 05:05:57 +0000 (0:00:02.777) 0:00:13.173 ***** 2026-02-19 05:06:25.228502 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:06:25.228512 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:06:25.228521 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:06:25.228531 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:06:25.228563 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:06:25.228573 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:06:25.228583 | orchestrator | 2026-02-19 05:06:25.228593 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-19 05:06:25.228603 | orchestrator | Thursday 19 February 2026 05:06:00 +0000 (0:00:02.445) 0:00:15.618 ***** 2026-02-19 05:06:25.228614 | orchestrator | ok: [testbed-node-3] 2026-02-19 
05:06:25.228624 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:06:25.228633 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:06:25.228643 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:06:25.228652 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:06:25.228663 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:06:25.228672 | orchestrator | 2026-02-19 05:06:25.228682 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-19 05:06:25.228692 | orchestrator | Thursday 19 February 2026 05:06:02 +0000 (0:00:02.821) 0:00:18.440 ***** 2026-02-19 05:06:25.228703 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:06:25.228713 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:06:25.228723 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:06:25.228733 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:06:25.228744 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:06:25.228754 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:06:25.228763 | orchestrator | 2026-02-19 05:06:25.228773 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-19 05:06:25.228783 | orchestrator | Thursday 19 February 2026 05:06:05 +0000 (0:00:02.136) 0:00:20.577 ***** 2026-02-19 05:06:25.228793 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:06:25.228803 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:06:25.228813 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:06:25.228823 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:06:25.228845 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:06:25.228854 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:06:25.228916 | orchestrator | 2026-02-19 05:06:25.228926 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-19 05:06:25.228935 | orchestrator | Thursday 19 February 2026 05:06:07 +0000 
(0:00:02.166) 0:00:22.743 ***** 2026-02-19 05:06:25.228944 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 05:06:25.228952 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 05:06:25.228961 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:06:25.228970 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 05:06:25.228978 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 05:06:25.228986 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:06:25.228995 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 05:06:25.229004 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 05:06:25.229012 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:06:25.229021 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 05:06:25.229029 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 05:06:25.229038 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:06:25.229064 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 05:06:25.229073 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 05:06:25.229082 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:06:25.229091 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-19 05:06:25.229099 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-19 05:06:25.229107 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:06:25.229116 | orchestrator | 2026-02-19 05:06:25.229125 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to 
sudo secure_path] ********************* 2026-02-19 05:06:25.229141 | orchestrator | Thursday 19 February 2026 05:06:09 +0000 (0:00:02.107) 0:00:24.851 ***** 2026-02-19 05:06:25.229150 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:06:25.229158 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:06:25.229167 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:06:25.229175 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:06:25.229184 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:06:25.229192 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:06:25.229201 | orchestrator | 2026-02-19 05:06:25.229209 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-19 05:06:25.229219 | orchestrator | Thursday 19 February 2026 05:06:11 +0000 (0:00:02.580) 0:00:27.431 ***** 2026-02-19 05:06:25.229228 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:06:25.229236 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:06:25.229245 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:06:25.229253 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:06:25.229262 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:06:25.229270 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:06:25.229279 | orchestrator | 2026-02-19 05:06:25.229287 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-19 05:06:25.229296 | orchestrator | Thursday 19 February 2026 05:06:13 +0000 (0:00:02.112) 0:00:29.543 ***** 2026-02-19 05:06:25.229304 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:06:25.229313 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:06:25.229321 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:06:25.229330 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:06:25.229338 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:06:25.229346 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:06:25.229355 | 
orchestrator | 2026-02-19 05:06:25.229363 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-19 05:06:25.229372 | orchestrator | Thursday 19 February 2026 05:06:16 +0000 (0:00:02.818) 0:00:32.362 ***** 2026-02-19 05:06:25.229381 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:06:25.229389 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:06:25.229398 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:06:25.229406 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:06:25.229414 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:06:25.229423 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:06:25.229431 | orchestrator | 2026-02-19 05:06:25.229440 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-19 05:06:25.229449 | orchestrator | Thursday 19 February 2026 05:06:18 +0000 (0:00:01.973) 0:00:34.335 ***** 2026-02-19 05:06:25.229457 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:06:25.229466 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:06:25.229474 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:06:25.229483 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:06:25.229491 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:06:25.229500 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:06:25.229508 | orchestrator | 2026-02-19 05:06:25.229517 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-19 05:06:25.229527 | orchestrator | Thursday 19 February 2026 05:06:20 +0000 (0:00:02.192) 0:00:36.527 ***** 2026-02-19 05:06:25.229535 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:06:25.229548 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:06:25.229557 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:06:25.229565 | orchestrator | skipping: 
[testbed-node-0] 2026-02-19 05:06:25.229574 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:06:25.229582 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:06:25.229591 | orchestrator | 2026-02-19 05:06:25.229599 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-19 05:06:25.229608 | orchestrator | Thursday 19 February 2026 05:06:22 +0000 (0:00:01.760) 0:00:38.288 ***** 2026-02-19 05:06:25.229624 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-19 05:06:25.229633 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-19 05:06:25.229641 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:06:25.229650 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-19 05:06:25.229658 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-19 05:06:25.229667 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:06:25.229675 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-19 05:06:25.229684 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-19 05:06:25.229692 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:06:25.229701 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-19 05:06:25.229709 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-19 05:06:25.229718 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:06:25.229727 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-19 05:06:25.229735 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-02-19 05:06:25.229743 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:06:25.229752 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-19 05:06:25.229760 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-19 05:06:25.229769 | orchestrator | skipping: [testbed-node-2] 2026-02-19 
05:06:25.229777 | orchestrator | 2026-02-19 05:06:25.229786 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-19 05:06:25.229795 | orchestrator | Thursday 19 February 2026 05:06:24 +0000 (0:00:02.028) 0:00:40.317 ***** 2026-02-19 05:06:25.229803 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:06:25.229812 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:06:25.229826 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:08:11.571284 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:08:11.571354 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:08:11.571360 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:08:11.571365 | orchestrator | 2026-02-19 05:08:11.571371 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-19 05:08:11.571377 | orchestrator | Thursday 19 February 2026 05:06:26 +0000 (0:00:01.832) 0:00:42.149 ***** 2026-02-19 05:08:11.571381 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:08:11.571385 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:08:11.571389 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:08:11.571393 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:08:11.571397 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:08:11.571401 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:08:11.571405 | orchestrator | 2026-02-19 05:08:11.571409 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-19 05:08:11.571413 | orchestrator | 2026-02-19 05:08:11.571417 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-19 05:08:11.571422 | orchestrator | Thursday 19 February 2026 05:06:29 +0000 (0:00:02.651) 0:00:44.800 ***** 2026-02-19 05:08:11.571426 | orchestrator | ok: [testbed-node-0] 2026-02-19 
05:08:11.571437 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:08:11.571441 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:08:11.571445 | orchestrator | 2026-02-19 05:08:11.571449 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-19 05:08:11.571454 | orchestrator | Thursday 19 February 2026 05:06:31 +0000 (0:00:01.831) 0:00:46.632 ***** 2026-02-19 05:08:11.571458 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:08:11.571462 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:08:11.571466 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:08:11.571470 | orchestrator | 2026-02-19 05:08:11.571473 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-19 05:08:11.571477 | orchestrator | Thursday 19 February 2026 05:06:33 +0000 (0:00:02.099) 0:00:48.732 ***** 2026-02-19 05:08:11.571494 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:08:11.571498 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:08:11.571502 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:08:11.571506 | orchestrator | 2026-02-19 05:08:11.571510 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-19 05:08:11.571513 | orchestrator | Thursday 19 February 2026 05:06:35 +0000 (0:00:02.181) 0:00:50.914 ***** 2026-02-19 05:08:11.571517 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:08:11.571521 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:08:11.571524 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:08:11.571528 | orchestrator | 2026-02-19 05:08:11.571532 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-02-19 05:08:11.571536 | orchestrator | Thursday 19 February 2026 05:06:37 +0000 (0:00:01.995) 0:00:52.910 ***** 2026-02-19 05:08:11.571539 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:08:11.571543 | orchestrator | skipping: 
[testbed-node-1]
2026-02-19 05:08:11.571547 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:08:11.571551 | orchestrator |
2026-02-19 05:08:11.571554 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-19 05:08:11.571558 | orchestrator | Thursday 19 February 2026 05:06:38 +0000 (0:00:01.425) 0:00:54.335 *****
2026-02-19 05:08:11.571562 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:08:11.571566 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:08:11.571570 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:08:11.571573 | orchestrator |
2026-02-19 05:08:11.571577 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-19 05:08:11.571581 | orchestrator | Thursday 19 February 2026 05:06:40 +0000 (0:00:01.727) 0:00:56.063 *****
2026-02-19 05:08:11.571585 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:08:11.571588 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:08:11.571592 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:08:11.571596 | orchestrator |
2026-02-19 05:08:11.571599 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-19 05:08:11.571603 | orchestrator | Thursday 19 February 2026 05:06:42 +0000 (0:00:02.194) 0:00:58.258 *****
2026-02-19 05:08:11.571607 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 05:08:11.571611 | orchestrator |
2026-02-19 05:08:11.571614 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-19 05:08:11.571618 | orchestrator | Thursday 19 February 2026 05:06:44 +0000 (0:00:01.943) 0:01:00.202 *****
2026-02-19 05:08:11.571622 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:08:11.571625 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:08:11.571629 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:08:11.571633 | orchestrator |
2026-02-19 05:08:11.571636 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-19 05:08:11.571640 | orchestrator | Thursday 19 February 2026 05:06:47 +0000 (0:00:02.526) 0:01:02.729 *****
2026-02-19 05:08:11.571644 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:08:11.571648 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:08:11.571651 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:08:11.571655 | orchestrator |
2026-02-19 05:08:11.571659 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-19 05:08:11.571663 | orchestrator | Thursday 19 February 2026 05:06:48 +0000 (0:00:01.680) 0:01:04.409 *****
2026-02-19 05:08:11.571667 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:08:11.571670 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:08:11.571674 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:08:11.571678 | orchestrator |
2026-02-19 05:08:11.571682 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-19 05:08:11.571685 | orchestrator | Thursday 19 February 2026 05:06:50 +0000 (0:00:01.917) 0:01:06.327 *****
2026-02-19 05:08:11.571689 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:08:11.571693 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:08:11.571696 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:08:11.571703 | orchestrator |
2026-02-19 05:08:11.571707 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-19 05:08:11.571711 | orchestrator | Thursday 19 February 2026 05:06:53 +0000 (0:00:02.571) 0:01:08.899 *****
2026-02-19 05:08:11.571714 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:08:11.571718 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:08:11.571730 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:08:11.571734 | orchestrator |
2026-02-19 05:08:11.571738 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-19 05:08:11.571742 | orchestrator | Thursday 19 February 2026 05:06:54 +0000 (0:00:01.403) 0:01:10.303 *****
2026-02-19 05:08:11.571746 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:08:11.571749 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:08:11.571753 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:08:11.571757 | orchestrator |
2026-02-19 05:08:11.571761 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-19 05:08:11.571764 | orchestrator | Thursday 19 February 2026 05:06:56 +0000 (0:00:01.682) 0:01:11.986 *****
2026-02-19 05:08:11.571768 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:08:11.571772 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:08:11.571776 | orchestrator | changed: [testbed-node-2]
2026-02-19 05:08:11.571779 | orchestrator |
2026-02-19 05:08:11.571783 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-19 05:08:11.571787 | orchestrator | Thursday 19 February 2026 05:06:58 +0000 (0:00:02.151) 0:01:14.137 *****
2026-02-19 05:08:11.571791 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:08:11.571794 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:08:11.571798 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:08:11.571802 | orchestrator |
2026-02-19 05:08:11.571805 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-19 05:08:11.571809 | orchestrator | Thursday 19 February 2026 05:07:00 +0000 (0:00:01.978) 0:01:16.116 *****
2026-02-19 05:08:11.571813 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:08:11.571816 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:08:11.571820 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:08:11.571824 | orchestrator |
2026-02-19 05:08:11.571828 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-19 05:08:11.571832 | orchestrator | Thursday 19 February 2026 05:07:01 +0000 (0:00:01.394) 0:01:17.510 *****
2026-02-19 05:08:11.571836 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-19 05:08:11.571840 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-19 05:08:11.571844 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-19 05:08:11.571848 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-19 05:08:11.571852 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-19 05:08:11.571856 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-19 05:08:11.571859 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-19 05:08:11.571863 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-19 05:08:11.571867 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-19 05:08:11.571871 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:08:11.571877 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:08:11.571881 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:08:11.571885 | orchestrator |
2026-02-19 05:08:11.571888 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-19 05:08:11.571892 | orchestrator | Thursday 19 February 2026 05:07:35 +0000 (0:00:33.943) 0:01:51.454 *****
2026-02-19 05:08:11.571934 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:08:11.571938 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:08:11.571942 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:08:11.571945 | orchestrator |
2026-02-19 05:08:11.571949 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-19 05:08:11.571953 | orchestrator | Thursday 19 February 2026 05:07:37 +0000 (0:00:01.592) 0:01:53.046 *****
2026-02-19 05:08:11.571957 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:08:11.571960 | orchestrator | changed: [testbed-node-2]
2026-02-19 05:08:11.571964 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:08:11.571968 | orchestrator |
2026-02-19 05:08:11.571972 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-19 05:08:11.571976 | orchestrator | Thursday 19 February 2026 05:07:40 +0000 (0:00:02.919) 0:01:55.966 *****
2026-02-19 05:08:11.571979 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:08:11.571983 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:08:11.571987 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:08:11.571991 | orchestrator |
2026-02-19 05:08:11.571997 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-19 05:08:11.572001 | orchestrator | Thursday 19 February 2026 05:07:42 +0000 (0:00:02.390) 0:01:58.357 *****
2026-02-19 05:08:11.572005 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:08:11.572009 | orchestrator | changed: [testbed-node-2]
2026-02-19 05:08:11.572013 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:08:11.572016 | orchestrator |
2026-02-19 05:08:11.572020 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-19 05:08:11.572024 | orchestrator | Thursday 19 February 2026 05:08:09 +0000 (0:00:27.021) 0:02:25.378 *****
2026-02-19 05:08:11.572028 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:08:11.572031 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:08:11.572035 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:08:11.572039 | orchestrator |
2026-02-19 05:08:11.572043 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-19 05:08:11.572050 | orchestrator | Thursday 19 February 2026 05:08:11 +0000 (0:00:01.733) 0:02:27.112 *****
2026-02-19 05:09:03.750414 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:09:03.750562 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:09:03.750589 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:09:03.750609 | orchestrator |
2026-02-19 05:09:03.750631 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-19 05:09:03.750652 | orchestrator | Thursday 19 February 2026 05:08:13 +0000 (0:00:01.691) 0:02:28.803 *****
2026-02-19 05:09:03.750671 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:09:03.750691 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:09:03.750710 | orchestrator | changed: [testbed-node-2]
2026-02-19 05:09:03.750729 | orchestrator |
2026-02-19 05:09:03.750749 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-19 05:09:03.750768 | orchestrator | Thursday 19 February 2026 05:08:15 +0000 (0:00:01.979) 0:02:30.782 *****
2026-02-19 05:09:03.750787 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:09:03.750806 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:09:03.750824 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:09:03.750843 | orchestrator |
2026-02-19 05:09:03.750863 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-19 05:09:03.750880 | orchestrator | Thursday 19 February 2026 05:08:17 +0000 (0:00:01.861) 0:02:32.644 *****
2026-02-19 05:09:03.750900 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:09:03.750951 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:09:03.750972 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:09:03.751024 | orchestrator |
2026-02-19 05:09:03.751066 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-19 05:09:03.751118 | orchestrator | Thursday 19 February 2026 05:08:18 +0000 (0:00:01.344) 0:02:33.989 *****
2026-02-19 05:09:03.751137 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:09:03.751157 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:09:03.751175 | orchestrator | changed: [testbed-node-2]
2026-02-19 05:09:03.751195 | orchestrator |
2026-02-19 05:09:03.751215 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-19 05:09:03.751233 | orchestrator | Thursday 19 February 2026 05:08:20 +0000 (0:00:01.696) 0:02:35.685 *****
2026-02-19 05:09:03.751251 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:09:03.751270 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:09:03.751289 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:09:03.751307 | orchestrator |
2026-02-19 05:09:03.751325 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-19 05:09:03.751344 | orchestrator | Thursday 19 February 2026 05:08:22 +0000 (0:00:02.014) 0:02:37.700 *****
2026-02-19 05:09:03.751363 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:09:03.751381 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:09:03.751400 | orchestrator | changed: [testbed-node-2]
2026-02-19 05:09:03.751420 | orchestrator |
2026-02-19 05:09:03.751438 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-19 05:09:03.751458 | orchestrator | Thursday 19 February 2026 05:08:24 +0000 (0:00:01.945) 0:02:39.645 *****
2026-02-19 05:09:03.751477 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:09:03.751496 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:09:03.751515 | orchestrator | changed: [testbed-node-2]
2026-02-19 05:09:03.751533 | orchestrator |
2026-02-19 05:09:03.751551 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-19 05:09:03.751569 | orchestrator | Thursday 19 February 2026 05:08:26 +0000 (0:00:01.940) 0:02:41.586 *****
2026-02-19 05:09:03.751589 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:09:03.751608 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:09:03.751627 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:09:03.751646 | orchestrator |
2026-02-19 05:09:03.751665 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-19 05:09:03.751684 | orchestrator | Thursday 19 February 2026 05:08:27 +0000 (0:00:01.364) 0:02:42.951 *****
2026-02-19 05:09:03.751704 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:09:03.751722 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:09:03.751741 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:09:03.751760 | orchestrator |
2026-02-19 05:09:03.751778 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-19 05:09:03.751797 | orchestrator | Thursday 19 February 2026 05:08:28 +0000 (0:00:01.408) 0:02:44.360 *****
2026-02-19 05:09:03.751816 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:09:03.751835 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:09:03.751854 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:09:03.751873 | orchestrator |
2026-02-19 05:09:03.751892 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-19 05:09:03.751988 | orchestrator | Thursday 19 February 2026 05:08:30 +0000 (0:00:01.862) 0:02:46.222 *****
2026-02-19 05:09:03.752014 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:09:03.752032 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:09:03.752052 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:09:03.752070 | orchestrator |
2026-02-19 05:09:03.752089 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-19 05:09:03.752110 | orchestrator | Thursday 19 February 2026 05:08:32 +0000 (0:00:01.681) 0:02:47.903 *****
2026-02-19 05:09:03.752129 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-19 05:09:03.752149 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-19 05:09:03.752185 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-19 05:09:03.752204 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-19 05:09:03.752223 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-19 05:09:03.752242 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-19 05:09:03.752262 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-19 05:09:03.752281 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-19 05:09:03.752326 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-19 05:09:03.752347 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-19 05:09:03.752368 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-19 05:09:03.752387 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-19 05:09:03.752405 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-19 05:09:03.752424 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-19 05:09:03.752442 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-19 05:09:03.752461 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-19 05:09:03.752479 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-19 05:09:03.752497 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-19 05:09:03.752516 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-19 05:09:03.752535 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-19 05:09:03.752555 | orchestrator |
2026-02-19 05:09:03.752572 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-19 05:09:03.752588 | orchestrator |
2026-02-19 05:09:03.752606 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-19 05:09:03.752622 | orchestrator | Thursday 19 February 2026 05:08:37 +0000 (0:00:04.822) 0:02:52.726 *****
2026-02-19 05:09:03.752640 | orchestrator | ok: [testbed-node-3]
2026-02-19 05:09:03.752658 | orchestrator | ok: [testbed-node-4]
2026-02-19 05:09:03.752674 | orchestrator | ok: [testbed-node-5]
2026-02-19 05:09:03.752691 | orchestrator |
2026-02-19 05:09:03.752708 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-19 05:09:03.752724 | orchestrator | Thursday 19 February 2026 05:08:38 +0000 (0:00:01.676) 0:02:54.403 *****
2026-02-19 05:09:03.752740 | orchestrator | ok: [testbed-node-3]
2026-02-19 05:09:03.752756 | orchestrator | ok: [testbed-node-4]
2026-02-19 05:09:03.752773 | orchestrator | ok: [testbed-node-5]
2026-02-19 05:09:03.752789 | orchestrator |
2026-02-19 05:09:03.752805 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-19 05:09:03.752821 | orchestrator | Thursday 19 February 2026 05:08:41 +0000 (0:00:02.686) 0:02:57.089 *****
2026-02-19 05:09:03.752837 | orchestrator | ok: [testbed-node-3]
2026-02-19 05:09:03.752853 | orchestrator | ok: [testbed-node-4]
2026-02-19 05:09:03.752871 | orchestrator | ok: [testbed-node-5]
2026-02-19 05:09:03.752887 | orchestrator |
2026-02-19 05:09:03.752904 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-19 05:09:03.752943 | orchestrator | Thursday 19 February 2026 05:08:43 +0000 (0:00:01.742) 0:02:58.742 *****
2026-02-19 05:09:03.752961 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 05:09:03.752988 | orchestrator |
2026-02-19 05:09:03.753004 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-19 05:09:03.753018 | orchestrator | Thursday 19 February 2026 05:08:44 +0000 (0:00:01.461) 0:03:00.485 *****
2026-02-19 05:09:03.753028 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:09:03.753038 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:09:03.753048 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:09:03.753058 | orchestrator |
2026-02-19 05:09:03.753068 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-19 05:09:03.753078 | orchestrator | Thursday 19 February 2026 05:08:46 +0000 (0:00:01.461) 0:03:01.947 *****
2026-02-19 05:09:03.753087 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:09:03.753099 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:09:03.753115 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:09:03.753139 | orchestrator |
2026-02-19 05:09:03.753171 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-19 05:09:03.753187 | orchestrator | Thursday 19 February 2026 05:08:47 +0000 (0:00:01.436) 0:03:03.384 *****
2026-02-19 05:09:03.753202 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:09:03.753218 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:09:03.753233 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:09:03.753250 | orchestrator |
2026-02-19 05:09:03.753267 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-19 05:09:03.753283 | orchestrator | Thursday 19 February 2026 05:08:49 +0000 (0:00:01.335) 0:03:04.720 *****
2026-02-19 05:09:03.753300 | orchestrator | ok: [testbed-node-3]
2026-02-19 05:09:03.753317 | orchestrator | ok: [testbed-node-4]
2026-02-19 05:09:03.753334 | orchestrator | ok: [testbed-node-5]
2026-02-19 05:09:03.753349 | orchestrator |
2026-02-19 05:09:03.753366 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-19 05:09:03.753382 | orchestrator | Thursday 19 February 2026 05:08:50 +0000 (0:00:01.777) 0:03:06.497 *****
2026-02-19 05:09:03.753398 | orchestrator | ok: [testbed-node-3]
2026-02-19 05:09:03.753414 | orchestrator | ok: [testbed-node-4]
2026-02-19 05:09:03.753431 | orchestrator | ok: [testbed-node-5]
2026-02-19 05:09:03.753447 | orchestrator |
2026-02-19 05:09:03.753463 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-19 05:09:03.753478 | orchestrator | Thursday 19 February 2026 05:08:53 +0000 (0:00:02.529) 0:03:09.027 *****
2026-02-19 05:09:03.753494 | orchestrator | ok: [testbed-node-3]
2026-02-19 05:09:03.753512 | orchestrator | ok: [testbed-node-4]
2026-02-19 05:09:03.753529 | orchestrator | ok: [testbed-node-5]
2026-02-19 05:09:03.753544 | orchestrator |
2026-02-19 05:09:03.753561 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-19 05:09:03.753577 | orchestrator | Thursday 19 February 2026 05:08:55 +0000 (0:00:02.369) 0:03:11.396 *****
2026-02-19 05:09:03.753606 | orchestrator | changed: [testbed-node-3]
2026-02-19 05:10:11.172411 | orchestrator | changed: [testbed-node-4]
2026-02-19 05:10:11.172522 | orchestrator | changed: [testbed-node-5]
2026-02-19 05:10:11.172537 | orchestrator |
2026-02-19 05:10:11.172551 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-19 05:10:11.172563 | orchestrator |
2026-02-19 05:10:11.172574 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-19 05:10:11.172586 | orchestrator | Thursday 19 February 2026 05:09:03 +0000 (0:00:07.896) 0:03:19.293 *****
2026-02-19 05:10:11.172598 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:11.172610 | orchestrator |
2026-02-19 05:10:11.172621 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-19 05:10:11.172632 | orchestrator | Thursday 19 February 2026 05:09:05 +0000 (0:00:02.237) 0:03:21.530 *****
2026-02-19 05:10:11.172643 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:11.172654 | orchestrator |
2026-02-19 05:10:11.172665 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-19 05:10:11.172676 | orchestrator | Thursday 19 February 2026 05:09:07 +0000 (0:00:01.486) 0:03:23.017 *****
2026-02-19 05:10:11.172713 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-19 05:10:11.172724 | orchestrator |
2026-02-19 05:10:11.172735 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-19 05:10:11.172760 | orchestrator | Thursday 19 February 2026 05:09:09 +0000 (0:00:01.710) 0:03:24.728 *****
2026-02-19 05:10:11.172772 | orchestrator | changed: [testbed-manager]
2026-02-19 05:10:11.172783 | orchestrator |
2026-02-19 05:10:11.172794 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-19 05:10:11.172805 | orchestrator | Thursday 19 February 2026 05:09:11 +0000 (0:00:01.945) 0:03:26.673 *****
2026-02-19 05:10:11.172816 | orchestrator | changed: [testbed-manager]
2026-02-19 05:10:11.172827 | orchestrator |
2026-02-19 05:10:11.172838 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-19 05:10:11.172848 | orchestrator | Thursday 19 February 2026 05:09:12 +0000 (0:00:01.619) 0:03:28.292 *****
2026-02-19 05:10:11.172859 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-19 05:10:11.172870 | orchestrator |
2026-02-19 05:10:11.172881 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-19 05:10:11.172892 | orchestrator | Thursday 19 February 2026 05:09:15 +0000 (0:00:03.053) 0:03:31.346 *****
2026-02-19 05:10:11.172905 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-19 05:10:11.172917 | orchestrator |
2026-02-19 05:10:11.172930 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-19 05:10:11.172943 | orchestrator | Thursday 19 February 2026 05:09:17 +0000 (0:00:01.851) 0:03:33.197 *****
2026-02-19 05:10:11.173008 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:11.173021 | orchestrator |
2026-02-19 05:10:11.173033 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-19 05:10:11.173047 | orchestrator | Thursday 19 February 2026 05:09:19 +0000 (0:00:01.523) 0:03:34.721 *****
2026-02-19 05:10:11.173059 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:11.173072 | orchestrator |
2026-02-19 05:10:11.173084 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-19 05:10:11.173096 | orchestrator |
2026-02-19 05:10:11.173108 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-19 05:10:11.173120 | orchestrator | Thursday 19 February 2026 05:09:20 +0000 (0:00:01.647) 0:03:36.369 *****
2026-02-19 05:10:11.173132 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:11.173145 | orchestrator |
2026-02-19 05:10:11.173157 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-19 05:10:11.173169 | orchestrator | Thursday 19 February 2026 05:09:21 +0000 (0:00:01.117) 0:03:37.486 *****
2026-02-19 05:10:11.173182 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-19 05:10:11.173195 | orchestrator |
2026-02-19 05:10:11.173205 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-19 05:10:11.173216 | orchestrator | Thursday 19 February 2026 05:09:23 +0000 (0:00:01.527) 0:03:39.014 *****
2026-02-19 05:10:11.173227 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:11.173237 | orchestrator |
2026-02-19 05:10:11.173248 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-19 05:10:11.173259 | orchestrator | Thursday 19 February 2026 05:09:25 +0000 (0:00:01.850) 0:03:40.865 *****
2026-02-19 05:10:11.173269 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:11.173280 | orchestrator |
2026-02-19 05:10:11.173291 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-19 05:10:11.173301 | orchestrator | Thursday 19 February 2026 05:09:28 +0000 (0:00:02.794) 0:03:43.660 *****
2026-02-19 05:10:11.173312 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:11.173327 | orchestrator |
2026-02-19 05:10:11.173346 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-19 05:10:11.173370 | orchestrator | Thursday 19 February 2026 05:09:29 +0000 (0:00:01.433) 0:03:45.093 *****
2026-02-19 05:10:11.173414 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:11.173433 | orchestrator |
2026-02-19 05:10:11.173452 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-19 05:10:11.173470 | orchestrator | Thursday 19 February 2026 05:09:31 +0000 (0:00:01.478) 0:03:46.572 *****
2026-02-19 05:10:11.173486 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:11.173503 | orchestrator |
2026-02-19 05:10:11.173519 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-19 05:10:11.173537 | orchestrator | Thursday 19 February 2026 05:09:32 +0000 (0:00:01.637) 0:03:48.209 *****
2026-02-19 05:10:11.173556 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:11.173575 | orchestrator |
2026-02-19 05:10:11.173594 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-19 05:10:11.173614 | orchestrator | Thursday 19 February 2026 05:09:35 +0000 (0:00:02.464) 0:03:50.674 *****
2026-02-19 05:10:11.173632 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:11.173652 | orchestrator |
2026-02-19 05:10:11.173672 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-19 05:10:11.173685 | orchestrator |
2026-02-19 05:10:11.173696 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-19 05:10:11.173728 | orchestrator | Thursday 19 February 2026 05:09:36 +0000 (0:00:01.729) 0:03:52.403 *****
2026-02-19 05:10:11.173739 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:10:11.173750 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:10:11.173760 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:10:11.173771 | orchestrator |
2026-02-19 05:10:11.173782 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-19 05:10:11.173793 | orchestrator | Thursday 19 February 2026 05:09:38 +0000 (0:00:01.424) 0:03:53.828 *****
2026-02-19 05:10:11.173803 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:10:11.173814 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:10:11.173824 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:10:11.173835 | orchestrator |
2026-02-19 05:10:11.173846 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-19 05:10:11.173857 | orchestrator | Thursday 19 February 2026 05:09:39 +0000 (0:00:01.604) 0:03:55.433 *****
2026-02-19 05:10:11.173868 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 05:10:11.173879 | orchestrator |
2026-02-19 05:10:11.173890 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-19 05:10:11.173901 | orchestrator | Thursday 19 February 2026 05:09:41 +0000 (0:00:01.867) 0:03:57.300 *****
2026-02-19 05:10:11.173911 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-19 05:10:11.173922 | orchestrator |
2026-02-19 05:10:11.173933 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-19 05:10:11.173943 | orchestrator | Thursday 19 February 2026 05:09:43 +0000 (0:00:01.830) 0:03:59.131 *****
2026-02-19 05:10:11.174098 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-19 05:10:11.174112 | orchestrator |
2026-02-19 05:10:11.174123 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-19 05:10:11.174134 | orchestrator | Thursday 19 February 2026 05:09:45 +0000 (0:00:01.849) 0:04:00.980 *****
2026-02-19 05:10:11.174144 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:10:11.174155 | orchestrator |
2026-02-19 05:10:11.174166 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-19 05:10:11.174177 | orchestrator | Thursday 19 February 2026 05:09:46 +0000 (0:00:01.167) 0:04:02.148 *****
2026-02-19 05:10:11.174187 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-19 05:10:11.174198 | orchestrator |
2026-02-19 05:10:11.174209 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-19 05:10:11.174220 | orchestrator | Thursday 19 February 2026 05:09:48 +0000 (0:00:02.079) 0:04:04.228 *****
2026-02-19 05:10:11.174231 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-19 05:10:11.174241 | orchestrator |
2026-02-19 05:10:11.174263 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-19 05:10:11.174274 | orchestrator | Thursday 19 February 2026 05:09:50 +0000 (0:00:02.297) 0:04:06.525 *****
2026-02-19 05:10:11.174285 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-19 05:10:11.174296 | orchestrator |
2026-02-19 05:10:11.174307 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-19 05:10:11.174317 | orchestrator | Thursday 19 February 2026 05:09:52 +0000 (0:00:01.179) 0:04:07.705 *****
2026-02-19 05:10:11.174328 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-19 05:10:11.174338 | orchestrator |
2026-02-19 05:10:11.174349 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-19 05:10:11.174359 | orchestrator | Thursday 19 February 2026 05:09:53 +0000 (0:00:01.193) 0:04:08.898 *****
2026-02-19 05:10:11.174370 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-02-19 05:10:11.174381 | orchestrator |     "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-02-19 05:10:11.174393 | orchestrator | }
2026-02-19 05:10:11.174404 | orchestrator |
2026-02-19 05:10:11.174415 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-19 05:10:11.174425 | orchestrator | Thursday 19 February 2026 05:09:54 +0000 (0:00:01.124) 0:04:10.023 *****
2026-02-19 05:10:11.174436 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:10:11.174447 | orchestrator |
2026-02-19 05:10:11.174457 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-19 05:10:11.174468 | orchestrator | Thursday 19 February 2026 05:09:55 +0000 (0:00:01.143) 0:04:11.167 *****
2026-02-19 05:10:11.174478 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-19 05:10:11.174489 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-19 05:10:11.174500 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-19 05:10:11.174511 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-19 05:10:11.174521 | orchestrator |
2026-02-19 05:10:11.174532 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-19 05:10:11.174554 | orchestrator | Thursday 19 February 2026 05:10:00 +0000 (0:00:05.354) 0:04:16.521 *****
2026-02-19 05:10:11.174565 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-19 05:10:11.174576 | orchestrator |
2026-02-19 05:10:11.174587 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-19 05:10:11.174597 | orchestrator | Thursday 19 February 2026 05:10:03 +0000 (0:00:02.337) 0:04:18.859 *****
2026-02-19 05:10:11.174608 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-19 05:10:11.174619 | orchestrator |
2026-02-19 05:10:11.174630 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-19 05:10:11.174640 | orchestrator | Thursday 19 February 2026 05:10:05 +0000 (0:00:02.588) 0:04:21.447 *****
2026-02-19 05:10:11.174651 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-19 05:10:11.174662 | orchestrator |
2026-02-19 05:10:11.174672 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-19 05:10:11.174683 | orchestrator | Thursday 19 February 2026 05:10:10 +0000 (0:00:04.133) 0:04:25.581 *****
2026-02-19 05:10:11.174694 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:10:11.174705 | orchestrator |
2026-02-19 05:10:11.174725 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-19 05:10:40.445956 | orchestrator | Thursday 19 February 2026 05:10:11 +0000 (0:00:01.128) 0:04:26.710 *****
2026-02-19 05:10:40.446193 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-19 05:10:40.446212 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-19 05:10:40.446225 | orchestrator |
2026-02-19 05:10:40.446238 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-19 05:10:40.446275 | orchestrator | Thursday 19 February 2026 05:10:14 +0000 (0:00:02.936) 0:04:29.646 *****
2026-02-19 05:10:40.446291 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:10:40.446310 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:10:40.446334 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:10:40.446356 | orchestrator |
2026-02-19 05:10:40.446374 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-19 05:10:40.446394 | orchestrator | Thursday 19 February 2026 05:10:15 +0000 (0:00:01.648) 0:04:31.294 *****
2026-02-19 05:10:40.446412 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:10:40.446430 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:10:40.446452 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:10:40.446476 | orchestrator |
2026-02-19 05:10:40.446513 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-19 05:10:40.446531 | orchestrator |
2026-02-19 05:10:40.446549 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-19 05:10:40.446567 | orchestrator | Thursday 19 February 2026 05:10:17 +0000 (0:00:02.062) 0:04:33.357 *****
2026-02-19 05:10:40.446585 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:40.446604 | orchestrator |
2026-02-19 05:10:40.446621 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-19 05:10:40.446639 | orchestrator | Thursday 19 February 2026 05:10:18 +0000 (0:00:01.090) 0:04:34.447 *****
2026-02-19 05:10:40.446651 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-19 05:10:40.446662 | orchestrator |
2026-02-19 05:10:40.446673 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-19 05:10:40.446683 | orchestrator | Thursday 19 February 2026 05:10:20 +0000 (0:00:01.473) 0:04:35.921 *****
2026-02-19 05:10:40.446694 | orchestrator | ok: [testbed-manager]
2026-02-19 05:10:40.446704 |
orchestrator | 2026-02-19 05:10:40.446715 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-19 05:10:40.446726 | orchestrator | 2026-02-19 05:10:40.446736 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-19 05:10:40.446747 | orchestrator | Thursday 19 February 2026 05:10:24 +0000 (0:00:04.397) 0:04:40.318 ***** 2026-02-19 05:10:40.446757 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:10:40.446768 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:10:40.446779 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:10:40.446789 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:10:40.446800 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:10:40.446810 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:10:40.446820 | orchestrator | 2026-02-19 05:10:40.446831 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-19 05:10:40.446842 | orchestrator | Thursday 19 February 2026 05:10:26 +0000 (0:00:01.860) 0:04:42.178 ***** 2026-02-19 05:10:40.446853 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-19 05:10:40.446864 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-19 05:10:40.446874 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-19 05:10:40.446884 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-19 05:10:40.446895 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-19 05:10:40.446905 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-19 05:10:40.446916 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 
2026-02-19 05:10:40.446926 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-19 05:10:40.446937 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-19 05:10:40.446947 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-19 05:10:40.446969 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-19 05:10:40.447012 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-19 05:10:40.447023 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-19 05:10:40.447034 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-19 05:10:40.447044 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-19 05:10:40.447055 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-19 05:10:40.447065 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-19 05:10:40.447076 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-19 05:10:40.447086 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-19 05:10:40.447097 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-19 05:10:40.447108 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-19 05:10:40.447141 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-19 05:10:40.447152 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-19 
05:10:40.447163 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-19 05:10:40.447173 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-19 05:10:40.447184 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-19 05:10:40.447194 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-19 05:10:40.447205 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-19 05:10:40.447215 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-19 05:10:40.447226 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-19 05:10:40.447237 | orchestrator | 2026-02-19 05:10:40.447249 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-19 05:10:40.447259 | orchestrator | Thursday 19 February 2026 05:10:35 +0000 (0:00:08.482) 0:04:50.661 ***** 2026-02-19 05:10:40.447270 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:10:40.447281 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:10:40.447292 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:10:40.447303 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:10:40.447313 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:10:40.447324 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:10:40.447334 | orchestrator | 2026-02-19 05:10:40.447345 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-19 05:10:40.447356 | orchestrator | Thursday 19 February 2026 05:10:36 +0000 (0:00:01.853) 0:04:52.514 ***** 2026-02-19 05:10:40.447367 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:10:40.447377 | orchestrator | skipping: [testbed-node-4] 
2026-02-19 05:10:40.447388 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:10:40.447398 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:10:40.447409 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:10:40.447419 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:10:40.447430 | orchestrator | 2026-02-19 05:10:40.447441 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:10:40.447452 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 05:10:40.447465 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-19 05:10:40.447505 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-19 05:10:40.447516 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-19 05:10:40.447527 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-19 05:10:40.447538 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-19 05:10:40.447549 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-19 05:10:40.447559 | orchestrator | 2026-02-19 05:10:40.447570 | orchestrator | 2026-02-19 05:10:40.447581 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 05:10:40.447592 | orchestrator | Thursday 19 February 2026 05:10:40 +0000 (0:00:03.454) 0:04:55.968 ***** 2026-02-19 05:10:40.447603 | orchestrator | =============================================================================== 2026-02-19 05:10:40.447613 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 33.94s 2026-02-19 
05:10:40.447624 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.02s 2026-02-19 05:10:40.447635 | orchestrator | Manage labels ----------------------------------------------------------- 8.48s 2026-02-19 05:10:40.447645 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 7.90s 2026-02-19 05:10:40.447656 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.35s 2026-02-19 05:10:40.447666 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.82s 2026-02-19 05:10:40.447677 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.46s 2026-02-19 05:10:40.447688 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.40s 2026-02-19 05:10:40.447698 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.13s 2026-02-19 05:10:40.447709 | orchestrator | Manage taints ----------------------------------------------------------- 3.45s 2026-02-19 05:10:40.447720 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.05s 2026-02-19 05:10:40.447730 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.94s 2026-02-19 05:10:40.447748 | orchestrator | k3s_server : Kill the temporary service used for initialization --------- 2.92s 2026-02-19 05:10:40.879833 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.82s 2026-02-19 05:10:40.879910 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.82s 2026-02-19 05:10:40.879919 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.80s 2026-02-19 05:10:40.879926 | orchestrator | k3s_prereq : Enable IPv4 forwarding 
------------------------------------- 2.78s 2026-02-19 05:10:40.879932 | orchestrator | k3s_agent : Check if system is PXE-booted ------------------------------- 2.69s 2026-02-19 05:10:40.879939 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.65s 2026-02-19 05:10:40.879946 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.59s 2026-02-19 05:10:41.176634 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-19 05:10:41.176780 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-02-19 05:10:41.184060 | orchestrator | + set -e 2026-02-19 05:10:41.184117 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-19 05:10:41.184129 | orchestrator | ++ export INTERACTIVE=false 2026-02-19 05:10:41.184141 | orchestrator | ++ INTERACTIVE=false 2026-02-19 05:10:41.184176 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-19 05:10:41.184196 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-19 05:10:41.184207 | orchestrator | + osism apply openstackclient 2026-02-19 05:10:53.254302 | orchestrator | 2026-02-19 05:10:53 | INFO  | Task 62be1273-8ae5-4a3a-8cfc-5565735e21b5 (openstackclient) was prepared for execution. 2026-02-19 05:10:53.254466 | orchestrator | 2026-02-19 05:10:53 | INFO  | It takes a moment until task 62be1273-8ae5-4a3a-8cfc-5565735e21b5 (openstackclient) has been started and output is visible here. 
2026-02-19 05:11:26.583741 | orchestrator | 2026-02-19 05:11:26.583819 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-19 05:11:26.583825 | orchestrator | 2026-02-19 05:11:26.583830 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-19 05:11:26.583835 | orchestrator | Thursday 19 February 2026 05:10:59 +0000 (0:00:01.811) 0:00:01.811 ***** 2026-02-19 05:11:26.583840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-19 05:11:26.583885 | orchestrator | 2026-02-19 05:11:26.583889 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-19 05:11:26.583894 | orchestrator | Thursday 19 February 2026 05:11:01 +0000 (0:00:01.767) 0:00:03.579 ***** 2026-02-19 05:11:26.583898 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-19 05:11:26.583903 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-19 05:11:26.583908 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-19 05:11:26.583912 | orchestrator | 2026-02-19 05:11:26.583916 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-19 05:11:26.583919 | orchestrator | Thursday 19 February 2026 05:11:03 +0000 (0:00:02.010) 0:00:05.589 ***** 2026-02-19 05:11:26.583924 | orchestrator | changed: [testbed-manager] 2026-02-19 05:11:26.583928 | orchestrator | 2026-02-19 05:11:26.583932 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-19 05:11:26.583935 | orchestrator | Thursday 19 February 2026 05:11:05 +0000 (0:00:02.074) 0:00:07.664 ***** 2026-02-19 05:11:26.583939 | orchestrator | ok: [testbed-manager] 2026-02-19 05:11:26.583944 | 
orchestrator | 2026-02-19 05:11:26.583948 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-19 05:11:26.583952 | orchestrator | Thursday 19 February 2026 05:11:07 +0000 (0:00:02.084) 0:00:09.749 ***** 2026-02-19 05:11:26.583956 | orchestrator | ok: [testbed-manager] 2026-02-19 05:11:26.583960 | orchestrator | 2026-02-19 05:11:26.583964 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-19 05:11:26.583968 | orchestrator | Thursday 19 February 2026 05:11:09 +0000 (0:00:02.017) 0:00:11.766 ***** 2026-02-19 05:11:26.583971 | orchestrator | ok: [testbed-manager] 2026-02-19 05:11:26.583975 | orchestrator | 2026-02-19 05:11:26.583979 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-19 05:11:26.583983 | orchestrator | Thursday 19 February 2026 05:11:10 +0000 (0:00:01.477) 0:00:13.244 ***** 2026-02-19 05:11:26.583986 | orchestrator | changed: [testbed-manager] 2026-02-19 05:11:26.583990 | orchestrator | 2026-02-19 05:11:26.583994 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-19 05:11:26.583998 | orchestrator | Thursday 19 February 2026 05:11:21 +0000 (0:00:10.338) 0:00:23.582 ***** 2026-02-19 05:11:26.584002 | orchestrator | changed: [testbed-manager] 2026-02-19 05:11:26.584005 | orchestrator | 2026-02-19 05:11:26.584047 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-19 05:11:26.584051 | orchestrator | Thursday 19 February 2026 05:11:22 +0000 (0:00:01.897) 0:00:25.480 ***** 2026-02-19 05:11:26.584054 | orchestrator | changed: [testbed-manager] 2026-02-19 05:11:26.584060 | orchestrator | 2026-02-19 05:11:26.584066 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-19 05:11:26.584072 | orchestrator | Thursday 19 February 
2026 05:11:24 +0000 (0:00:01.549) 0:00:27.029 ***** 2026-02-19 05:11:26.584098 | orchestrator | ok: [testbed-manager] 2026-02-19 05:11:26.584102 | orchestrator | 2026-02-19 05:11:26.584106 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:11:26.584110 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-19 05:11:26.584115 | orchestrator | 2026-02-19 05:11:26.584119 | orchestrator | 2026-02-19 05:11:26.584123 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 05:11:26.584126 | orchestrator | Thursday 19 February 2026 05:11:26 +0000 (0:00:01.835) 0:00:28.865 ***** 2026-02-19 05:11:26.584130 | orchestrator | =============================================================================== 2026-02-19 05:11:26.584134 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 10.34s 2026-02-19 05:11:26.584138 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.08s 2026-02-19 05:11:26.584141 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.07s 2026-02-19 05:11:26.584145 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.02s 2026-02-19 05:11:26.584149 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.01s 2026-02-19 05:11:26.584152 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.90s 2026-02-19 05:11:26.584156 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.84s 2026-02-19 05:11:26.584160 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.77s 2026-02-19 05:11:26.584163 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.55s 
2026-02-19 05:11:26.584167 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.48s 2026-02-19 05:11:26.874340 | orchestrator | + osism apply -a upgrade common 2026-02-19 05:11:28.869200 | orchestrator | 2026-02-19 05:11:28 | INFO  | Task 66cd1e0c-ea75-42e4-953d-509d8f855800 (common) was prepared for execution. 2026-02-19 05:11:28.869302 | orchestrator | 2026-02-19 05:11:28 | INFO  | It takes a moment until task 66cd1e0c-ea75-42e4-953d-509d8f855800 (common) has been started and output is visible here. 2026-02-19 05:11:46.924490 | orchestrator | 2026-02-19 05:11:46.924603 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-19 05:11:46.924623 | orchestrator | 2026-02-19 05:11:46.924637 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-19 05:11:46.924651 | orchestrator | Thursday 19 February 2026 05:11:34 +0000 (0:00:02.112) 0:00:02.112 ***** 2026-02-19 05:11:46.924664 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 05:11:46.924676 | orchestrator | 2026-02-19 05:11:46.924684 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-19 05:11:46.924692 | orchestrator | Thursday 19 February 2026 05:11:38 +0000 (0:00:03.555) 0:00:05.668 ***** 2026-02-19 05:11:46.924701 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:11:46.924709 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:11:46.924716 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:11:46.924723 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:11:46.924731 | orchestrator | ok: 
[testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:11:46.924738 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:11:46.924745 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:11:46.924752 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:11:46.924778 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:11:46.924786 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:11:46.924793 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:11:46.924800 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 05:11:46.924807 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:11:46.924815 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:11:46.924822 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 05:11:46.924829 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:11:46.924836 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 05:11:46.924843 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 05:11:46.924850 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 05:11:46.924857 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 05:11:46.924864 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 
05:11:46.924871 | orchestrator | 2026-02-19 05:11:46.924878 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-19 05:11:46.924885 | orchestrator | Thursday 19 February 2026 05:11:41 +0000 (0:00:03.401) 0:00:09.070 ***** 2026-02-19 05:11:46.924893 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 05:11:46.924902 | orchestrator | 2026-02-19 05:11:46.924909 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-19 05:11:46.924916 | orchestrator | Thursday 19 February 2026 05:11:44 +0000 (0:00:02.626) 0:00:11.696 ***** 2026-02-19 05:11:46.924928 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:11:46.924950 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:11:46.924982 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:11:46.924990 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:11:46.925005 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:46.925014 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:11:46.925245 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:11:46.925262 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:11:46.925279 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:46.925299 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:49.164756 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:49.164948 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:49.164972 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:49.164984 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:49.164999 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:49.165020 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:49.165147 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:49.165202 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:49.165241 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:49.165270 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:49.165291 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:11:49.165311 | orchestrator | 2026-02-19 05:11:49.165332 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-19 05:11:49.165353 | orchestrator | Thursday 19 February 2026 05:11:48 +0000 (0:00:04.245) 0:00:15.942 ***** 2026-02-19 05:11:49.165376 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:49.165399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:49.165419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:49.165454 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:51.371353 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:51.371471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:51.371488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:51.371503 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:11:51.371515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:51.371570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:51.371583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:51.371594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:51.371625 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:11:51.371655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:51.371666 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:51.371676 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:11:51.371687 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:11:51.371758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:51.371773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:51.371783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:51.371792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:51.371802 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:11:51.371812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:51.371848 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:11:51.371859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:51.371881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:54.164653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:54.164726 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:11:54.164733 | orchestrator | 2026-02-19 05:11:54.164750 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-19 05:11:54.164755 | orchestrator | Thursday 19 February 2026 05:11:51 +0000 (0:00:02.730) 0:00:18.672 ***** 2026-02-19 05:11:54.164760 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:54.164768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:54.164773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:54.164777 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:54.164805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:54.164819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:54.164824 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:54.164828 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:54.164832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:54.164837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:54.164841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:54.164848 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:11:54.164852 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:11:54.164856 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:11:54.164860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:11:54.164868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:11:54.164881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 
05:12:05.680920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:05.681033 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:12:05.681098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:05.681107 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:12:05.681113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:05.681121 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:05.681146 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:12:05.681153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:12:05.681160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:05.681166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:05.681172 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:12:05.681178 | orchestrator |
2026-02-19 05:12:05.681185 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-02-19 05:12:05.681192 | orchestrator | Thursday 19 February 2026 05:11:54 +0000 (0:00:02.802) 0:00:21.475 *****
2026-02-19 05:12:05.681198 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:12:05.681216 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:12:05.681223 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:12:05.681229 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:12:05.681234 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:12:05.681240 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:12:05.681246 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:12:05.681252 | orchestrator |
2026-02-19 05:12:05.681258 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-19 05:12:05.681263 | orchestrator | Thursday 19 February 2026 05:11:56 +0000 (0:00:01.903) 0:00:23.378 *****
2026-02-19 05:12:05.681269 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:12:05.681275 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:12:05.681286 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:12:05.681292 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:12:05.681298 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:12:05.681304 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:12:05.681309 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:12:05.681315 | orchestrator |
2026-02-19 05:12:05.681321 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-19 05:12:05.681327 | orchestrator | Thursday 19 February 2026 05:11:57 +0000 (0:00:01.850) 0:00:25.229 *****
2026-02-19 05:12:05.681332 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:12:05.681338 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:12:05.681344 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:12:05.681349 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:12:05.681360 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:12:05.681366 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:12:05.681372 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:12:05.681377 | orchestrator |
2026-02-19 05:12:05.681383 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-02-19 05:12:05.681389 | orchestrator | Thursday 19 February 2026 05:11:59 +0000 (0:00:01.940) 0:00:27.169 *****
2026-02-19 05:12:05.681395 | orchestrator | changed: [testbed-manager]
2026-02-19 05:12:05.681400 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:12:05.681406 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:12:05.681412 | orchestrator | changed: [testbed-node-2]
2026-02-19 05:12:05.681417 | orchestrator | changed: [testbed-node-3]
2026-02-19 05:12:05.681423 | orchestrator | changed: [testbed-node-4]
2026-02-19 05:12:05.681429 | orchestrator | changed: [testbed-node-5]
2026-02-19 05:12:05.681434 | orchestrator |
2026-02-19 05:12:05.681440 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-19 05:12:05.681446 | orchestrator | Thursday 19 February 2026 05:12:02 +0000 (0:00:02.757) 0:00:29.927 *****
2026-02-19 05:12:05.681452 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:05.681460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:05.681467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:05.681474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:05.681487 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:07.930332 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:07.930478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:07.930491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930503 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930527 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930597 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930682 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930705 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:07.930739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:27.605007 | orchestrator |
2026-02-19 05:12:27.605125 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-19 05:12:27.605136 | orchestrator | Thursday 19 February 2026 05:12:07 +0000 (0:00:05.301) 0:00:35.229 *****
2026-02-19 05:12:27.605142 | orchestrator | [WARNING]: Skipped
2026-02-19 05:12:27.605149 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-19 05:12:27.605155 | orchestrator | to this access issue:
2026-02-19 05:12:27.605161 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-19 05:12:27.605166 | orchestrator | directory
2026-02-19 05:12:27.605171 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-19 05:12:27.605178 | orchestrator |
2026-02-19 05:12:27.605183 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-19 05:12:27.605188 | orchestrator | Thursday 19 February 2026 05:12:10 +0000 (0:00:02.346) 0:00:37.575 *****
2026-02-19 05:12:27.605193 | orchestrator | [WARNING]: Skipped
2026-02-19 05:12:27.605198 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-19 05:12:27.605203 | orchestrator | to this access issue:
2026-02-19 05:12:27.605208 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-19 05:12:27.605213 | orchestrator | directory
2026-02-19 05:12:27.605218 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-19 05:12:27.605223 | orchestrator |
2026-02-19 05:12:27.605228 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-19 05:12:27.605233 | orchestrator | Thursday 19 February 2026 05:12:12 +0000 (0:00:01.864) 0:00:39.440 *****
2026-02-19 05:12:27.605238 | orchestrator | [WARNING]: Skipped
2026-02-19 05:12:27.605243 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-19 05:12:27.605248 | orchestrator | to this access issue:
2026-02-19 05:12:27.605253 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-19 05:12:27.605259 | orchestrator | directory
2026-02-19 05:12:27.605264 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-19 05:12:27.605269 | orchestrator |
2026-02-19 05:12:27.605274 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-19 05:12:27.605279 | orchestrator | Thursday 19 February 2026 05:12:13 +0000 (0:00:01.774) 0:00:41.215 *****
2026-02-19 05:12:27.605284 | orchestrator | [WARNING]: Skipped
2026-02-19 05:12:27.605289 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-19 05:12:27.605294 | orchestrator | to this access issue:
2026-02-19 05:12:27.605299 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-19 05:12:27.605304 | orchestrator | directory
2026-02-19 05:12:27.605309 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-19 05:12:27.605314 | orchestrator |
2026-02-19 05:12:27.605319 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-19 05:12:27.605324 | orchestrator | Thursday 19 February 2026 05:12:15 +0000 (0:00:01.825) 0:00:43.041 *****
2026-02-19 05:12:27.605329 | orchestrator | changed: [testbed-manager]
2026-02-19 05:12:27.605334 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:12:27.605339 | orchestrator | changed: [testbed-node-2]
2026-02-19 05:12:27.605363 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:12:27.605368 | orchestrator | changed: [testbed-node-3]
2026-02-19 05:12:27.605373 | orchestrator | changed: [testbed-node-4]
2026-02-19 05:12:27.605378 | orchestrator | changed: [testbed-node-5]
2026-02-19 05:12:27.605383 | orchestrator |
2026-02-19 05:12:27.605388 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-19 05:12:27.605393 | orchestrator | Thursday 19 February 2026 05:12:19 +0000 (0:00:03.886) 0:00:46.928 *****
2026-02-19 05:12:27.605398 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:12:27.605404 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:12:27.605409 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:12:27.605414 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:12:27.605419 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:12:27.605424 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:12:27.605429 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:12:27.605434 | orchestrator |
2026-02-19 05:12:27.605439 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-19 05:12:27.605444 | orchestrator | Thursday 19 February 2026 05:12:22 +0000 (0:00:02.942) 0:00:50.193 *****
2026-02-19 05:12:27.605449 | orchestrator | ok: [testbed-manager]
2026-02-19 05:12:27.605454 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:12:27.605459 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:12:27.605464 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:12:27.605469 | orchestrator | ok: [testbed-node-3]
2026-02-19 05:12:27.605474 | orchestrator | ok: [testbed-node-4]
2026-02-19 05:12:27.605479 | orchestrator | ok: [testbed-node-5]
2026-02-19 05:12:27.605484 | orchestrator |
2026-02-19 05:12:27.605489 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-02-19 05:12:27.605494 | orchestrator | Thursday 19 February 2026 05:12:25 +0000 (0:00:02.942) 0:00:53.136 *****
2026-02-19 05:12:27.605532 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:27.605550 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:27.605560 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:27.605575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:27.605586 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:27.605596 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:27.605606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:27.605625 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:34.760469 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:34.760605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:34.760625 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:34.760662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:34.760674 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:34.760686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:34.760697 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:34.760727 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:34.760740 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:34.760752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:34.760770 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:34.760782 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:34.760793 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:34.760805 | orchestrator |
2026-02-19 05:12:34.760818 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-19 05:12:34.760831 | orchestrator | Thursday 19 February 2026 05:12:28 +0000 (0:00:02.883) 0:00:56.020 *****
2026-02-19 05:12:34.760842 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:12:34.760853 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:12:34.760864 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:12:34.760875 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:12:34.760885 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:12:34.760896 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:12:34.760906 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:12:34.760917 | orchestrator |
2026-02-19 05:12:34.760945 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-19 05:12:34.760957 | orchestrator | Thursday 19 February 2026 05:12:31 +0000 (0:00:03.016) 0:00:59.036 *****
2026-02-19 05:12:34.760968 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:12:34.760978 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:12:34.760989 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:12:34.761000 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:12:34.761015 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:12:34.761034 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:12:37.195765 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:12:37.195844 | orchestrator |
2026-02-19 05:12:37.195853 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-02-19 05:12:37.195874 | orchestrator | Thursday 19 February 2026 05:12:34 +0000 (0:00:03.034) 0:01:02.070 *****
2026-02-19 05:12:37.195881 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:37.195888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:37.195893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:37.195897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:37.195902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:37.195907 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:37.195924 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:37.195944 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:12:37.195949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:12:37.195954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1',
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:12:37.195958 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:12:37.195964 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:12:37.195970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:12:37.195980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:12:41.634768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:12:41.634924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:12:41.634946 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:12:41.634957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:12:41.634966 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:12:41.634975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:12:41.634984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:12:41.634994 | orchestrator | 2026-02-19 05:12:41.635004 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-19 05:12:41.635014 | orchestrator | Thursday 19 February 2026 05:12:39 +0000 (0:00:04.478) 0:01:06.548 ***** 2026-02-19 05:12:41.635025 | orchestrator | changed: [testbed-manager] => { 2026-02-19 05:12:41.635054 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:12:41.635111 | orchestrator | } 2026-02-19 05:12:41.635121 | orchestrator | changed: [testbed-node-0] => { 2026-02-19 05:12:41.635129 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:12:41.635138 | orchestrator | } 2026-02-19 05:12:41.635146 | orchestrator | changed: [testbed-node-1] => { 2026-02-19 05:12:41.635155 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:12:41.635163 | orchestrator | } 2026-02-19 05:12:41.635172 | orchestrator | changed: [testbed-node-2] => { 2026-02-19 05:12:41.635180 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:12:41.635189 | orchestrator | } 2026-02-19 05:12:41.635197 | orchestrator | changed: [testbed-node-3] => { 2026-02-19 05:12:41.635206 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:12:41.635215 | orchestrator | } 2026-02-19 05:12:41.635225 | orchestrator | changed: [testbed-node-4] => { 2026-02-19 05:12:41.635235 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:12:41.635244 | orchestrator | } 2026-02-19 05:12:41.635291 | orchestrator | changed: [testbed-node-5] => { 2026-02-19 05:12:41.635303 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:12:41.635313 | orchestrator | } 2026-02-19 05:12:41.635323 | orchestrator | 
2026-02-19 05:12:41.635352 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-19 05:12:41.635362 | orchestrator | Thursday 19 February 2026 05:12:41 +0000 (0:00:01.934) 0:01:08.483 ***** 2026-02-19 05:12:41.635373 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:12:41.635385 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:41.635396 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:41.635407 | orchestrator | 
skipping: [testbed-manager] 2026-02-19 05:12:41.635417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:12:41.635428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:41.635446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:41.635456 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:12:41.635466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:12:41.635485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:47.690814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:47.690924 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:12:47.690942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:12:47.690958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:47.690970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:47.691003 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:12:47.691031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:12:47.691044 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:47.691061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:47.691128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:12:47.691141 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:12:47.691152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:47.691164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:47.691175 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:12:47.691187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:12:47.691206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:47.691217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:12:47.691229 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:12:47.691240 | orchestrator | 2026-02-19 05:12:47.691252 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-19 05:12:47.691265 | orchestrator | Thursday 19 February 2026 05:12:43 +0000 (0:00:02.806) 0:01:11.290 ***** 2026-02-19 05:12:47.691276 | orchestrator | 2026-02-19 05:12:47.691288 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-19 05:12:47.691299 | orchestrator | Thursday 19 February 2026 05:12:44 +0000 (0:00:00.462) 0:01:11.752 ***** 2026-02-19 05:12:47.691310 | orchestrator | 2026-02-19 05:12:47.691320 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-19 05:12:47.691331 | orchestrator | Thursday 19 February 2026 05:12:44 +0000 (0:00:00.436) 0:01:12.189 ***** 2026-02-19 05:12:47.691342 | orchestrator | 2026-02-19 05:12:47.691359 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-19 05:12:47.691371 | orchestrator | Thursday 19 February 2026 05:12:45 +0000 (0:00:00.413) 0:01:12.603 ***** 2026-02-19 05:12:47.691383 | orchestrator | 2026-02-19 05:12:47.691395 | orchestrator | TASK 
[common : Flush handlers] ************************************************* 2026-02-19 05:12:47.691407 | orchestrator | Thursday 19 February 2026 05:12:45 +0000 (0:00:00.438) 0:01:13.041 ***** 2026-02-19 05:12:47.691419 | orchestrator | 2026-02-19 05:12:47.691431 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-19 05:12:47.691443 | orchestrator | Thursday 19 February 2026 05:12:46 +0000 (0:00:00.664) 0:01:13.706 ***** 2026-02-19 05:12:47.691455 | orchestrator | 2026-02-19 05:12:47.691467 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-19 05:12:47.691479 | orchestrator | Thursday 19 February 2026 05:12:46 +0000 (0:00:00.449) 0:01:14.155 ***** 2026-02-19 05:12:47.691491 | orchestrator | 2026-02-19 05:12:47.691510 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-19 05:12:50.335649 | orchestrator | Thursday 19 February 2026 05:12:47 +0000 (0:00:00.816) 0:01:14.972 ***** 2026-02-19 05:12:50.335756 | orchestrator | fatal: [testbed-manager]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_uhl6itxn/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_uhl6itxn/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_uhl6itxn/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-19 05:12:50.335915 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_uzii4wnm/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_uzii4wnm/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_uzii4wnm/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-19 05:12:50.335944 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_hcr2bstp/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_hcr2bstp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_hcr2bstp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-19 05:12:50.335983 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_o04aueup/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_o04aueup/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_o04aueup/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-19 05:12:53.604785 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ock0lca8/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ock0lca8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ock0lca8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-19 05:12:53.605864 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_1_8cpl5s/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_1_8cpl5s/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_1_8cpl5s/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-19 05:12:53.605958 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_3x60sb25/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_3x60sb25/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_3x60sb25/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-19 05:12:53.605987 | orchestrator | 2026-02-19 05:12:53.606008 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:12:53.606120 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-19 05:12:53.606134 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-19 05:12:53.606155 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-19 05:12:53.606167 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-19 05:12:53.606194 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-19 05:12:53.606206 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-19 05:12:53.606217 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-19 05:12:53.606233 | orchestrator | 2026-02-19 05:12:53.606253 | orchestrator | 2026-02-19 05:12:53.606289 | orchestrator | TASKS RECAP ******************************************************************** 
2026-02-19 05:12:53.867322 | orchestrator | 2026-02-19 05:12:53 | INFO  | Task 8a66e459-3f02-45ce-a2fa-effd12412da7 (common) was prepared for execution. 2026-02-19 05:12:53.867423 | orchestrator | 2026-02-19 05:12:53 | INFO  | It takes a moment until task 8a66e459-3f02-45ce-a2fa-effd12412da7 (common) has been started and output is visible here. 2026-02-19 05:13:10.587215 | orchestrator | Thursday 19 February 2026 05:12:53 +0000 (0:00:05.938) 0:01:20.911 ***** 2026-02-19 05:13:10.587293 | orchestrator | =============================================================================== 2026-02-19 05:13:10.587299 | orchestrator | common : Restart fluentd container -------------------------------------- 5.94s 2026-02-19 05:13:10.587304 | orchestrator | common : Copying over config.json files for services -------------------- 5.30s 2026-02-19 05:13:10.587308 | orchestrator | service-check-containers : common | Check containers -------------------- 4.48s 2026-02-19 05:13:10.587313 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.25s 2026-02-19 05:13:10.587317 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.89s 2026-02-19 05:13:10.587320 | orchestrator | common : Flush handlers ------------------------------------------------- 3.68s 2026-02-19 05:13:10.587324 | orchestrator | common : include_tasks -------------------------------------------------- 3.56s 2026-02-19 05:13:10.587328 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.40s 2026-02-19 05:13:10.587332 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.26s 2026-02-19 05:13:10.587336 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.03s 2026-02-19 05:13:10.587340 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.02s 2026-02-19 05:13:10.587343 | orchestrator | 
common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.94s 2026-02-19 05:13:10.587347 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.88s 2026-02-19 05:13:10.587351 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.81s 2026-02-19 05:13:10.587355 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.80s 2026-02-19 05:13:10.587358 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.76s 2026-02-19 05:13:10.587362 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.73s 2026-02-19 05:13:10.587367 | orchestrator | common : include_tasks -------------------------------------------------- 2.63s 2026-02-19 05:13:10.587371 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.35s 2026-02-19 05:13:10.587374 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.94s 2026-02-19 05:13:10.587378 | orchestrator | 2026-02-19 05:13:10.587385 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-19 05:13:10.587391 | orchestrator | 2026-02-19 05:13:10.587398 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-19 05:13:10.587405 | orchestrator | Thursday 19 February 2026 05:12:59 +0000 (0:00:01.929) 0:00:01.929 ***** 2026-02-19 05:13:10.587449 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 05:13:10.587459 | orchestrator | 2026-02-19 05:13:10.587466 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-19 05:13:10.587472 | orchestrator | Thursday 19 February 2026 05:13:02 +0000 
(0:00:02.911) 0:00:04.840 ***** 2026-02-19 05:13:10.587479 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:13:10.587486 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:13:10.587492 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:13:10.587498 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:13:10.587505 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:13:10.587510 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:13:10.587516 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:13:10.587523 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-19 05:13:10.587529 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:13:10.587535 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:13:10.587541 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:13:10.587548 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 05:13:10.587556 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:13:10.587563 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:13:10.587570 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-19 05:13:10.587574 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 05:13:10.587578 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 
'kolla-toolbox']) 2026-02-19 05:13:10.587581 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 05:13:10.587585 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 05:13:10.587589 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 05:13:10.587605 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-19 05:13:10.587609 | orchestrator | 2026-02-19 05:13:10.587613 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-19 05:13:10.587617 | orchestrator | Thursday 19 February 2026 05:13:05 +0000 (0:00:03.113) 0:00:07.954 ***** 2026-02-19 05:13:10.587621 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 05:13:10.587626 | orchestrator | 2026-02-19 05:13:10.587630 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-19 05:13:10.587633 | orchestrator | Thursday 19 February 2026 05:13:07 +0000 (0:00:02.830) 0:00:10.784 ***** 2026-02-19 05:13:10.587639 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:10.587652 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:10.587660 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:10.587664 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:10.587668 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:10.587672 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:10.587684 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:12.992984 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-19 05:13:12.993156 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:12.993214 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:12.993227 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-19 05:13:12.993279 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:12.993294 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:12.993326 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:12.993339 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:12.993359 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:12.993372 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:12.993388 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:12.993401 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:12.993412 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:12.993423 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:12.993435 | orchestrator | 2026-02-19 05:13:12.993448 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-19 05:13:12.993460 | orchestrator | Thursday 19 February 2026 05:13:12 +0000 (0:00:04.503) 0:00:15.287 ***** 2026-02-19 05:13:12.993473 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:12.993494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:15.240443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:15.240538 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:15.240571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:15.240582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:15.240590 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:13:15.240598 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:15.240604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:15.240611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:15.240633 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:13:15.240694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:15.240709 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:13:15.240720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:15.240732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:15.240742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:15.240748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 
05:13:15.240785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:15.240792 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:13:15.240802 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:13:15.240812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:15.240840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:16.453418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:16.453577 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:13:16.453611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:16.453653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:16.453667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:16.453679 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:13:16.453691 | orchestrator | 2026-02-19 05:13:16.453704 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-19 05:13:16.453716 | orchestrator | Thursday 19 February 2026 05:13:15 +0000 (0:00:02.741) 0:00:18.028 ***** 2026-02-19 05:13:16.453728 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:16.453763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:16.453775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:16.453808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:16.453820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:16.453833 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:16.453850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:16.453870 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:13:16.453889 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:16.453920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:16.453941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:16.453974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:28.967351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:28.967508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:28.967541 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:13:28.967565 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:13:28.967586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:28.967606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:28.967643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:28.967656 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:13:28.967668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:28.967679 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:13:28.967690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:28.967701 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:13:28.967734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:13:28.967753 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:28.967765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:28.967776 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:13:28.967787 | orchestrator | 2026-02-19 05:13:28.967799 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-19 05:13:28.967811 | orchestrator | Thursday 19 February 2026 05:13:18 +0000 (0:00:03.113) 0:00:21.142 ***** 2026-02-19 05:13:28.967830 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:13:28.967841 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:13:28.967852 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:13:28.967862 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:13:28.967873 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:13:28.967883 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:13:28.967895 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:13:28.967908 | orchestrator | 2026-02-19 05:13:28.967920 
| orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-19 05:13:28.967932 | orchestrator | Thursday 19 February 2026 05:13:20 +0000 (0:00:01.836) 0:00:22.979 ***** 2026-02-19 05:13:28.967944 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:13:28.967956 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:13:28.967969 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:13:28.967981 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:13:28.967993 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:13:28.968004 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:13:28.968017 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:13:28.968029 | orchestrator | 2026-02-19 05:13:28.968041 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-19 05:13:28.968052 | orchestrator | Thursday 19 February 2026 05:13:21 +0000 (0:00:01.800) 0:00:24.780 ***** 2026-02-19 05:13:28.968062 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:13:28.968073 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:13:28.968084 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:13:28.968122 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:13:28.968133 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:13:28.968143 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:13:28.968154 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:13:28.968165 | orchestrator | 2026-02-19 05:13:28.968176 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-19 05:13:28.968186 | orchestrator | Thursday 19 February 2026 05:13:23 +0000 (0:00:01.847) 0:00:26.627 ***** 2026-02-19 05:13:28.968197 | orchestrator | ok: [testbed-manager] 2026-02-19 05:13:28.968209 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:13:28.968219 | orchestrator | ok: [testbed-node-1] 2026-02-19 
05:13:28.968230 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:13:28.968241 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:13:28.968251 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:13:28.968262 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:13:28.968273 | orchestrator | 2026-02-19 05:13:28.968283 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-19 05:13:28.968294 | orchestrator | Thursday 19 February 2026 05:13:26 +0000 (0:00:02.812) 0:00:29.440 ***** 2026-02-19 05:13:28.968306 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:28.968336 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:30.927534 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:30.927652 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:30.927662 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:30.927670 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:30.927678 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:30.927685 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:30.927692 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:30.927721 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:30.927738 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:30.927746 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:30.927753 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:30.927760 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:30.927767 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:30.927774 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:30.927791 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:49.089946 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:49.090235 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:49.090267 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:13:49.090352 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:13:49.090367 | orchestrator |
2026-02-19 05:13:49.090381 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-19 05:13:49.090394 | orchestrator | Thursday 19 February 2026 05:13:30 +0000 (0:00:04.285) 0:00:33.725 *****
2026-02-19 05:13:49.090405 | orchestrator | [WARNING]: Skipped
2026-02-19 05:13:49.090419 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-19 05:13:49.090432 | orchestrator | to this access issue:
2026-02-19 05:13:49.090444 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-19 05:13:49.090455 | orchestrator | directory
2026-02-19 05:13:49.090469 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-19 05:13:49.090483 | orchestrator |
2026-02-19 05:13:49.090496 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-19 05:13:49.090508 | orchestrator | Thursday 19 February 2026 05:13:33 +0000 (0:00:02.174) 0:00:35.900 *****
2026-02-19 05:13:49.090520 | orchestrator | [WARNING]: Skipped
2026-02-19 05:13:49.090533 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-19 05:13:49.090546 | orchestrator | to this access issue:
2026-02-19 05:13:49.090559 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-19 05:13:49.090572 | orchestrator | directory
2026-02-19 05:13:49.090584 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-19 05:13:49.090596 | orchestrator |
2026-02-19 05:13:49.090609 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-19 05:13:49.090622 | orchestrator | Thursday 19 February 2026 05:13:34 +0000 (0:00:01.728) 0:00:37.629 *****
2026-02-19 05:13:49.090655 | orchestrator | [WARNING]: Skipped
2026-02-19 05:13:49.090668 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-19 05:13:49.090680 | orchestrator | to this access issue:
2026-02-19 05:13:49.090694 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-19 05:13:49.090714 | orchestrator | directory
2026-02-19 05:13:49.090733 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-19 05:13:49.090752 | orchestrator |
2026-02-19 05:13:49.090769 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-19 05:13:49.090788 | orchestrator | Thursday 19 February 2026 05:13:36 +0000 (0:00:01.855) 0:00:39.485 *****
2026-02-19 05:13:49.090808 | orchestrator | [WARNING]: Skipped
2026-02-19 05:13:49.090826 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-19 05:13:49.090845 | orchestrator | to this access issue:
2026-02-19 05:13:49.090864 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-19 05:13:49.090882 | orchestrator | directory
2026-02-19 05:13:49.090901 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-19 05:13:49.090914 | orchestrator |
2026-02-19 05:13:49.090925 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-19 05:13:49.090935 | orchestrator | Thursday 19 February 2026 05:13:38 +0000 (0:00:01.865) 0:00:41.350 *****
2026-02-19 05:13:49.090946 | orchestrator | ok: [testbed-manager]
2026-02-19 05:13:49.090957 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:13:49.090968 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:13:49.090978 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:13:49.090989 | orchestrator | ok: [testbed-node-3]
2026-02-19 05:13:49.090999 | orchestrator | ok: [testbed-node-4]
2026-02-19 05:13:49.091009 | orchestrator | ok: [testbed-node-5]
2026-02-19 05:13:49.091020 | orchestrator |
2026-02-19 05:13:49.091054 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-19 05:13:49.091066 | orchestrator | Thursday 19 February 2026 05:13:42 +0000 (0:00:03.632) 0:00:44.982 *****
2026-02-19 05:13:49.091077 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:13:49.091089 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:13:49.091135 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:13:49.091149 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:13:49.091160 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:13:49.091171 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:13:49.091181 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-19 05:13:49.091192 | orchestrator |
2026-02-19 05:13:49.091203 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-19 05:13:49.091214 | orchestrator | Thursday 19 February 2026 05:13:45 +0000 (0:00:03.173) 0:00:48.157
***** 2026-02-19 05:13:49.091225 | orchestrator | ok: [testbed-manager] 2026-02-19 05:13:49.091236 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:13:49.091247 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:13:49.091258 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:13:49.091268 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:13:49.091279 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:13:49.091290 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:13:49.091300 | orchestrator | 2026-02-19 05:13:49.091311 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-19 05:13:49.091322 | orchestrator | Thursday 19 February 2026 05:13:48 +0000 (0:00:02.839) 0:00:50.996 ***** 2026-02-19 05:13:49.091345 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:49.091359 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-19 05:13:49.091372 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:49.091383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:49.091403 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:49.951532 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:49.951637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:49.951674 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:49.951688 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-02-19 05:13:49.951700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:49.951712 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:49.951724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:49.951782 | orchestrator | ok: [testbed-node-4] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:49.951805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:49.951848 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:49.951870 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:49.951889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:13:49.951908 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:49.951928 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:49.951949 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:13:49.951982 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-19 05:13:59.360497 | orchestrator |
2026-02-19 05:13:59.360631 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-19 05:13:59.360650 | orchestrator | Thursday 19 February 2026 05:13:51 +0000 (0:00:02.841) 0:00:53.838 *****
2026-02-19 05:13:59.360662 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:13:59.360673 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:13:59.360709 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:13:59.360721 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:13:59.360731 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:13:59.360742 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:13:59.360752 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-19 05:13:59.360763 | orchestrator |
2026-02-19 05:13:59.360774 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-19 05:13:59.360785 | orchestrator | Thursday 19 February 2026 05:13:53 +0000 (0:00:02.916) 0:00:56.754 *****
2026-02-19 05:13:59.360795 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:13:59.360806 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:13:59.360817 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:13:59.360827 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:13:59.360838 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:13:59.360848 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:13:59.360859 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-19 05:13:59.360870 | orchestrator |
2026-02-19 05:13:59.360880 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-02-19 05:13:59.360891 | orchestrator | Thursday 19 February 2026 05:13:57 +0000 (0:00:03.092) 0:00:59.847 *****
2026-02-19 05:13:59.360905 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-19 05:13:59.360920 | orchestrator |
changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:59.360931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:59.360943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:59.360986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:59.360999 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:59.361011 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:59.361022 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-19 05:13:59.361035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:59.361048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:59.361061 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:13:59.361095 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:14:03.667181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:14:03.667288 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:14:03.667307 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:14:03.667320 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:14:03.667329 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:14:03.667336 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-19 05:14:03.667362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:14:03.667397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:14:03.667405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:14:03.667413 | orchestrator | 2026-02-19 05:14:03.667421 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-19 05:14:03.667429 | orchestrator | Thursday 19 February 2026 05:14:01 +0000 (0:00:04.248) 0:01:04.096 ***** 2026-02-19 05:14:03.667439 | orchestrator | changed: [testbed-manager] => { 2026-02-19 05:14:03.667451 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:14:03.667462 | orchestrator | } 2026-02-19 05:14:03.667474 | orchestrator | changed: [testbed-node-0] 
=> { 2026-02-19 05:14:03.667486 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:14:03.667496 | orchestrator | } 2026-02-19 05:14:03.667507 | orchestrator | changed: [testbed-node-1] => { 2026-02-19 05:14:03.667518 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:14:03.667529 | orchestrator | } 2026-02-19 05:14:03.667540 | orchestrator | changed: [testbed-node-2] => { 2026-02-19 05:14:03.667550 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:14:03.667562 | orchestrator | } 2026-02-19 05:14:03.667573 | orchestrator | changed: [testbed-node-3] => { 2026-02-19 05:14:03.667584 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:14:03.667595 | orchestrator | } 2026-02-19 05:14:03.667606 | orchestrator | changed: [testbed-node-4] => { 2026-02-19 05:14:03.667618 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:14:03.667629 | orchestrator | } 2026-02-19 05:14:03.667640 | orchestrator | changed: [testbed-node-5] => { 2026-02-19 05:14:03.667652 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:14:03.667663 | orchestrator | } 2026-02-19 05:14:03.667675 | orchestrator | 2026-02-19 05:14:03.667687 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-19 05:14:03.667699 | orchestrator | Thursday 19 February 2026 05:14:03 +0000 (0:00:02.011) 0:01:06.108 ***** 2026-02-19 05:14:03.667713 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:14:03.667727 | orchestrator 
| skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:14:03.667749 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:14:03.667761 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:14:03.667773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:14:03.667852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:14:10.023470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:14:10.023570 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:14:10.023583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:14:10.023593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:14:10.023623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:14:10.023632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:14:10.023640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-19 05:14:10.023661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:14:10.023670 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:14:10.023677 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:14:10.023747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:14:10.023757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:14:10.023765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:14:10.023779 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:14:10.023786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:14:10.023794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:14:10.023802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:14:10.023809 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:14:10.023821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-19 05:14:10.023835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:15:38.687928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:15:38.688045 | 
orchestrator | skipping: [testbed-node-5]
2026-02-19 05:15:38.688061 | orchestrator |
2026-02-19 05:15:38.688073 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 05:15:38.688084 | orchestrator | Thursday 19 February 2026 05:14:06 +0000 (0:00:02.996) 0:01:09.105 *****
2026-02-19 05:15:38.688093 | orchestrator |
2026-02-19 05:15:38.688102 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 05:15:38.688111 | orchestrator | Thursday 19 February 2026 05:14:06 +0000 (0:00:00.457) 0:01:09.562 *****
2026-02-19 05:15:38.688140 | orchestrator |
2026-02-19 05:15:38.688150 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 05:15:38.688158 | orchestrator | Thursday 19 February 2026 05:14:07 +0000 (0:00:00.441) 0:01:10.004 *****
2026-02-19 05:15:38.688167 | orchestrator |
2026-02-19 05:15:38.688238 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 05:15:38.688250 | orchestrator | Thursday 19 February 2026 05:14:07 +0000 (0:00:00.435) 0:01:10.440 *****
2026-02-19 05:15:38.688264 | orchestrator |
2026-02-19 05:15:38.688278 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 05:15:38.688292 | orchestrator | Thursday 19 February 2026 05:14:08 +0000 (0:00:00.450) 0:01:10.890 *****
2026-02-19 05:15:38.688306 | orchestrator |
2026-02-19 05:15:38.688321 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 05:15:38.688336 | orchestrator | Thursday 19 February 2026 05:14:08 +0000 (0:00:00.680) 0:01:11.571 *****
2026-02-19 05:15:38.688345 | orchestrator |
2026-02-19 05:15:38.688354 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-19 05:15:38.688363 | orchestrator | Thursday 19 February 2026 05:14:09 +0000 (0:00:00.433) 0:01:12.005 *****
2026-02-19 05:15:38.688372 | orchestrator |
2026-02-19 05:15:38.688380 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-19 05:15:38.688389 | orchestrator | Thursday 19 February 2026 05:14:09 +0000 (0:00:00.802) 0:01:12.808 *****
2026-02-19 05:15:38.688398 | orchestrator | changed: [testbed-manager]
2026-02-19 05:15:38.688407 | orchestrator | changed: [testbed-node-3]
2026-02-19 05:15:38.688415 | orchestrator | changed: [testbed-node-4]
2026-02-19 05:15:38.688424 | orchestrator | changed: [testbed-node-2]
2026-02-19 05:15:38.688433 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:15:38.688443 | orchestrator | changed: [testbed-node-5]
2026-02-19 05:15:38.688453 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:15:38.688463 | orchestrator |
2026-02-19 05:15:38.688473 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-19 05:15:38.688483 | orchestrator | Thursday 19 February 2026 05:14:46 +0000 (0:00:36.805) 0:01:49.613 *****
2026-02-19 05:15:38.688493 | orchestrator | changed: [testbed-manager]
2026-02-19 05:15:38.688503 | orchestrator | changed: [testbed-node-3]
2026-02-19 05:15:38.688513 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:15:38.688523 | orchestrator | changed: [testbed-node-2]
2026-02-19 05:15:38.688533 | orchestrator | changed: [testbed-node-4]
2026-02-19 05:15:38.688543 | orchestrator | changed: [testbed-node-5]
2026-02-19 05:15:38.688552 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:15:38.688562 | orchestrator |
2026-02-19 05:15:38.688571 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-19 05:15:38.688593 | orchestrator | Thursday 19 February 2026 05:15:23 +0000 (0:00:36.287) 0:02:25.901 *****
2026-02-19 05:15:38.688603 | orchestrator | ok: [testbed-manager]
2026-02-19 05:15:38.688614 | orchestrator |
ok: [testbed-node-0]
2026-02-19 05:15:38.688624 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:15:38.688633 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:15:38.688644 | orchestrator | ok: [testbed-node-3]
2026-02-19 05:15:38.688653 | orchestrator | ok: [testbed-node-4]
2026-02-19 05:15:38.688661 | orchestrator | ok: [testbed-node-5]
2026-02-19 05:15:38.688670 | orchestrator |
2026-02-19 05:15:38.688679 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-19 05:15:38.688687 | orchestrator | Thursday 19 February 2026 05:15:26 +0000 (0:00:02.965) 0:02:28.866 *****
2026-02-19 05:15:38.688696 | orchestrator | changed: [testbed-manager]
2026-02-19 05:15:38.688704 | orchestrator | changed: [testbed-node-3]
2026-02-19 05:15:38.688713 | orchestrator | changed: [testbed-node-4]
2026-02-19 05:15:38.688721 | orchestrator | changed: [testbed-node-5]
2026-02-19 05:15:38.688730 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:15:38.688738 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:15:38.688747 | orchestrator | changed: [testbed-node-2]
2026-02-19 05:15:38.688765 | orchestrator |
2026-02-19 05:15:38.688774 | orchestrator | PLAY RECAP *********************************************************************
2026-02-19 05:15:38.688797 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 05:15:38.688808 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 05:15:38.688817 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 05:15:38.688825 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 05:15:38.688852 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 05:15:38.688861 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 05:15:38.688870 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-19 05:15:38.688879 | orchestrator |
2026-02-19 05:15:38.688887 | orchestrator |
2026-02-19 05:15:38.688896 | orchestrator | TASKS RECAP ********************************************************************
2026-02-19 05:15:38.688905 | orchestrator | Thursday 19 February 2026 05:15:38 +0000 (0:00:12.116) 0:02:40.983 *****
2026-02-19 05:15:38.688913 | orchestrator | ===============================================================================
2026-02-19 05:15:38.688922 | orchestrator | common : Restart fluentd container ------------------------------------- 36.81s
2026-02-19 05:15:38.688931 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 36.29s
2026-02-19 05:15:38.688939 | orchestrator | common : Restart cron container ---------------------------------------- 12.12s
2026-02-19 05:15:38.688948 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.50s
2026-02-19 05:15:38.688957 | orchestrator | common : Copying over config.json files for services -------------------- 4.29s
2026-02-19 05:15:38.688965 | orchestrator | service-check-containers : common | Check containers -------------------- 4.25s
2026-02-19 05:15:38.688974 | orchestrator | common : Flush handlers ------------------------------------------------- 3.70s
2026-02-19 05:15:38.688982 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.63s
2026-02-19 05:15:38.688991 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.17s
2026-02-19 05:15:38.689000 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.11s
2026-02-19 05:15:38.689008 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.11s
2026-02-19 05:15:38.689017 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.09s
2026-02-19 05:15:38.689025 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.00s
2026-02-19 05:15:38.689034 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.97s
2026-02-19 05:15:38.689042 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.92s
2026-02-19 05:15:38.689051 | orchestrator | common : include_tasks -------------------------------------------------- 2.91s
2026-02-19 05:15:38.689060 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.84s
2026-02-19 05:15:38.689068 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.84s
2026-02-19 05:15:38.689077 | orchestrator | common : include_tasks -------------------------------------------------- 2.83s
2026-02-19 05:15:38.689085 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.81s
2026-02-19 05:15:38.979720 | orchestrator | + osism apply -a upgrade loadbalancer
2026-02-19 05:15:40.970560 | orchestrator | 2026-02-19 05:15:40 | INFO  | Task bd57aeda-5417-4c33-b5dc-0f5a8fe1f4b1 (loadbalancer) was prepared for execution.
2026-02-19 05:15:40.970659 | orchestrator | 2026-02-19 05:15:40 | INFO  | It takes a moment until task bd57aeda-5417-4c33-b5dc-0f5a8fe1f4b1 (loadbalancer) has been started and output is visible here.
2026-02-19 05:16:02.542973 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-02-19 05:16:02.543053 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-02-19 05:16:02.543065 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-02-19 05:16:02.543070 | orchestrator | (): 'NoneType' object is not subscriptable
2026-02-19 05:16:02.543079 | orchestrator |
2026-02-19 05:16:02.543086 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-19 05:16:02.543093 | orchestrator |
2026-02-19 05:16:02.543099 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-19 05:16:02.543108 | orchestrator | Thursday 19 February 2026 05:15:46 +0000 (0:00:01.069) 0:00:01.070 *****
2026-02-19 05:16:02.543117 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:16:02.543125 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:16:02.543132 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:16:02.543152 | orchestrator |
2026-02-19 05:16:02.543159 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-19 05:16:02.543166 | orchestrator | Thursday 19 February 2026 05:15:47 +0000 (0:00:00.793) 0:00:01.863 *****
2026-02-19 05:16:02.543172 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-19 05:16:02.543179 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-19 05:16:02.543227 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-19 05:16:02.543252 | orchestrator |
2026-02-19 05:16:02.543258 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-19 05:16:02.543264 | orchestrator |
2026-02-19 05:16:02.543271 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-19 05:16:02.543277 | orchestrator | Thursday 19 February 2026 05:15:48 +0000 (0:00:00.854) 0:00:02.717 *****
2026-02-19 05:16:02.543284 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 05:16:02.543291 | orchestrator |
2026-02-19 05:16:02.543297 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-02-19 05:16:02.543304 | orchestrator | Thursday 19 February 2026 05:15:49 +0000 (0:00:01.775) 0:00:04.493 *****
2026-02-19 05:16:02.543310 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:16:02.543317 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:16:02.543323 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:16:02.543330 | orchestrator |
2026-02-19 05:16:02.543336 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-02-19 05:16:02.543342 | orchestrator | Thursday 19 February 2026 05:15:51 +0000 (0:00:00.989) 0:00:05.545 *****
2026-02-19 05:16:02.543349 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:16:02.543355 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:16:02.543362 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:16:02.543368 | orchestrator |
2026-02-19 05:16:02.543374 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-19 05:16:02.543381 | orchestrator | Thursday 19 February 2026 05:15:52 +0000 (0:00:00.614) 0:00:06.534 *****
2026-02-19 05:16:02.543387 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:16:02.543393 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:16:02.543399 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:16:02.543406 | orchestrator |
2026-02-19 05:16:02.543412 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-19 05:16:02.543438 | orchestrator | Thursday 19 February 2026 05:15:52 +0000 (0:00:00.614) 0:00:07.149 *****
2026-02-19 05:16:02.543445 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 05:16:02.543452 | orchestrator |
2026-02-19 05:16:02.543459 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-19 05:16:02.543465 | orchestrator | Thursday 19 February 2026 05:15:53 +0000 (0:00:00.996) 0:00:08.146 *****
2026-02-19 05:16:02.543472 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:16:02.543478 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:16:02.543485 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:16:02.543492 | orchestrator |
2026-02-19 05:16:02.543498 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-19 05:16:02.543505 | orchestrator | Thursday 19 February 2026 05:15:54 +0000 (0:00:00.622) 0:00:08.769 *****
2026-02-19 05:16:02.543512 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-19 05:16:02.543519 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-19 05:16:02.543525 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-19 05:16:02.543532 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-19 05:16:02.543538 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-19 05:16:02.543545 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-19 05:16:02.543551 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-19 05:16:02.543558 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-19 05:16:02.543564 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-19 05:16:02.543570 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-19 05:16:02.543576 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-19 05:16:02.543597 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-19 05:16:02.543604 | orchestrator |
2026-02-19 05:16:02.543611 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-19 05:16:02.543617 | orchestrator | Thursday 19 February 2026 05:15:57 +0000 (0:00:03.378) 0:00:12.147 *****
2026-02-19 05:16:02.543624 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-02-19 05:16:02.543631 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-02-19 05:16:02.543638 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-02-19 05:16:02.543645 | orchestrator |
2026-02-19 05:16:02.543652 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-19 05:16:02.543658 | orchestrator | Thursday 19 February 2026 05:15:58 +0000 (0:00:00.920) 0:00:13.067 *****
2026-02-19 05:16:02.543665 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-02-19 05:16:02.543671 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-02-19 05:16:02.543678 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-02-19 05:16:02.543684 | orchestrator |
2026-02-19 05:16:02.543691 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-19 05:16:02.543698 | orchestrator | Thursday 19 February 2026 05:15:59 +0000 (0:00:01.201) 0:00:14.269 *****
2026-02-19 05:16:02.543710 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-19 05:16:02.543716 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:16:02.543723 | orchestrator | skipping: [testbed-node-1]
=> (item=ip_vs)  2026-02-19 05:16:02.543730 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:16:02.543736 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-19 05:16:02.543743 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:16:02.543757 | orchestrator | 2026-02-19 05:16:02.543764 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-19 05:16:02.543770 | orchestrator | Thursday 19 February 2026 05:16:00 +0000 (0:00:01.077) 0:00:15.346 ***** 2026-02-19 05:16:02.543779 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:02.543791 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:02.543798 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:02.543805 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:02.543818 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:08.231063 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:08.231166 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:16:08.231173 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:16:08.231177 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:16:08.231181 | orchestrator | 2026-02-19 05:16:08.231186 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-19 05:16:08.231223 | orchestrator | Thursday 19 February 2026 05:16:02 +0000 (0:00:01.715) 0:00:17.062 ***** 2026-02-19 05:16:08.231260 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:16:08.231267 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:16:08.231272 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:16:08.231276 | orchestrator | 2026-02-19 05:16:08.231280 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-19 05:16:08.231284 | orchestrator | Thursday 19 February 2026 05:16:03 +0000 (0:00:00.911) 0:00:17.973 ***** 2026-02-19 05:16:08.231288 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-19 05:16:08.231293 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-19 05:16:08.231297 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-19 05:16:08.231301 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-19 05:16:08.231305 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-19 05:16:08.231309 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-19 05:16:08.231312 | orchestrator | 2026-02-19 05:16:08.231316 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-19 05:16:08.231320 | orchestrator | Thursday 19 February 2026 05:16:05 +0000 (0:00:01.805) 0:00:19.778 ***** 2026-02-19 05:16:08.231324 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:16:08.231328 
| orchestrator | ok: [testbed-node-1] 2026-02-19 05:16:08.231331 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:16:08.231335 | orchestrator | 2026-02-19 05:16:08.231339 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-19 05:16:08.231343 | orchestrator | Thursday 19 February 2026 05:16:06 +0000 (0:00:01.191) 0:00:20.970 ***** 2026-02-19 05:16:08.231346 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:16:08.231350 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:16:08.231354 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:16:08.231357 | orchestrator | 2026-02-19 05:16:08.231361 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-19 05:16:08.231365 | orchestrator | Thursday 19 February 2026 05:16:07 +0000 (0:00:01.148) 0:00:22.119 ***** 2026-02-19 05:16:08.231385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-19 05:16:08.231395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:16:08.231400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:08.231405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4a532d3151b4c8a409d05a612905cd7c1092e43c', '__omit_place_holder__4a532d3151b4c8a409d05a612905cd7c1092e43c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-19 05:16:08.231409 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:16:08.231414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-19 05:16:08.231419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:16:08.231426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:08.231434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4a532d3151b4c8a409d05a612905cd7c1092e43c', '__omit_place_holder__4a532d3151b4c8a409d05a612905cd7c1092e43c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-19 05:16:11.353430 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:16:11.353513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-19 05:16:11.353523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:16:11.353530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:11.353536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4a532d3151b4c8a409d05a612905cd7c1092e43c', '__omit_place_holder__4a532d3151b4c8a409d05a612905cd7c1092e43c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-19 05:16:11.353557 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:16:11.353563 | orchestrator | 2026-02-19 05:16:11.353569 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-19 05:16:11.353575 | orchestrator | Thursday 19 February 2026 05:16:08 +0000 (0:00:00.634) 0:00:22.753 ***** 2026-02-19 05:16:11.353592 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:11.353612 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:11.353618 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:11.353624 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:11.353629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:11.353634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4a532d3151b4c8a409d05a612905cd7c1092e43c', '__omit_place_holder__4a532d3151b4c8a409d05a612905cd7c1092e43c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-19 05:16:11.353647 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:11.353652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:11.353695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4a532d3151b4c8a409d05a612905cd7c1092e43c', '__omit_place_holder__4a532d3151b4c8a409d05a612905cd7c1092e43c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-19 05:16:16.835585 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:16.835699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:16.835716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4a532d3151b4c8a409d05a612905cd7c1092e43c', '__omit_place_holder__4a532d3151b4c8a409d05a612905cd7c1092e43c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-19 05:16:16.835754 | orchestrator | 2026-02-19 05:16:16.835769 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-19 05:16:16.835782 | orchestrator | Thursday 19 February 2026 05:16:11 +0000 (0:00:03.119) 0:00:25.873 ***** 2026-02-19 05:16:16.835794 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:16.835806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:16.835832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-19 
05:16:16.835863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:16.835876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:16.835888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:16.835907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:16:16.835919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:16:16.835930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:16:16.835941 | orchestrator | 2026-02-19 05:16:16.835958 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-19 05:16:16.835969 | orchestrator | Thursday 19 February 2026 
05:16:15 +0000 (0:00:03.869) 0:00:29.742 ***** 2026-02-19 05:16:16.835981 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-19 05:16:16.835993 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-19 05:16:16.836004 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-19 05:16:16.836014 | orchestrator | 2026-02-19 05:16:16.836026 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-19 05:16:16.836044 | orchestrator | Thursday 19 February 2026 05:16:16 +0000 (0:00:01.616) 0:00:31.358 ***** 2026-02-19 05:16:33.382999 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-19 05:16:33.383137 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-19 05:16:33.383166 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-19 05:16:33.383185 | orchestrator | 2026-02-19 05:16:33.383272 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-19 05:16:33.383294 | orchestrator | Thursday 19 February 2026 05:16:20 +0000 (0:00:03.226) 0:00:34.585 ***** 2026-02-19 05:16:33.383314 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:16:33.383335 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:16:33.383355 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:16:33.383375 | orchestrator | 2026-02-19 05:16:33.383395 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-19 05:16:33.383413 | orchestrator | Thursday 19 February 2026 05:16:21 +0000 (0:00:01.009) 0:00:35.595 ***** 2026-02-19 05:16:33.383470 
| orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-19 05:16:33.383493 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-19 05:16:33.383513 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-19 05:16:33.383533 | orchestrator | 2026-02-19 05:16:33.383552 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-19 05:16:33.383572 | orchestrator | Thursday 19 February 2026 05:16:23 +0000 (0:00:02.004) 0:00:37.599 ***** 2026-02-19 05:16:33.383593 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-19 05:16:33.383615 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-19 05:16:33.383636 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-19 05:16:33.383656 | orchestrator | 2026-02-19 05:16:33.383676 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-19 05:16:33.383696 | orchestrator | Thursday 19 February 2026 05:16:24 +0000 (0:00:01.696) 0:00:39.295 ***** 2026-02-19 05:16:33.383716 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:16:33.383736 | orchestrator | 2026-02-19 05:16:33.383755 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-19 05:16:33.383776 | orchestrator | Thursday 19 February 2026 05:16:25 +0000 (0:00:01.084) 0:00:40.380 ***** 2026-02-19 05:16:33.383798 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-02-19 
05:16:33.383818 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-02-19 05:16:33.383837 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-02-19 05:16:33.383857 | orchestrator | 2026-02-19 05:16:33.383877 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-19 05:16:33.383894 | orchestrator | Thursday 19 February 2026 05:16:27 +0000 (0:00:01.585) 0:00:41.966 ***** 2026-02-19 05:16:33.383912 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-19 05:16:33.383929 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-19 05:16:33.383946 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-19 05:16:33.383963 | orchestrator | 2026-02-19 05:16:33.383981 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-19 05:16:33.383999 | orchestrator | Thursday 19 February 2026 05:16:29 +0000 (0:00:01.618) 0:00:43.584 ***** 2026-02-19 05:16:33.384018 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:16:33.384036 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:16:33.384054 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:16:33.384071 | orchestrator | 2026-02-19 05:16:33.384088 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-19 05:16:33.384106 | orchestrator | Thursday 19 February 2026 05:16:29 +0000 (0:00:00.315) 0:00:43.900 ***** 2026-02-19 05:16:33.384125 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:16:33.384144 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:16:33.384162 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:16:33.384182 | orchestrator | 2026-02-19 05:16:33.384200 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-19 05:16:33.384250 | orchestrator | Thursday 19 February 2026 05:16:30 +0000 
(0:00:00.908) 0:00:44.808 ***** 2026-02-19 05:16:33.384294 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:33.384370 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:33.384393 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:33.384412 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:33.384431 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:33.384450 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:33.384476 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:16:33.384520 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:16:35.219075 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:16:35.219163 | orchestrator | 2026-02-19 05:16:35.219176 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 
2026-02-19 05:16:35.219187 | orchestrator | Thursday 19 February 2026 05:16:33 +0000 (0:00:03.089) 0:00:47.897 ***** 2026-02-19 05:16:35.219198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-19 05:16:35.219257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:16:35.219268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:35.219278 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:16:35.219288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-19 05:16:35.219332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:16:35.219359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:35.219369 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:16:35.219378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-19 05:16:35.219387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:16:35.219396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:35.219406 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:16:35.219414 | orchestrator | 2026-02-19 05:16:35.219424 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-19 05:16:35.219433 | orchestrator | Thursday 19 February 2026 05:16:33 +0000 (0:00:00.603) 0:00:48.501 ***** 2026-02-19 05:16:35.219442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-19 05:16:35.219461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:16:35.219477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:42.547965 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:16:42.548056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-19 05:16:42.548069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:16:42.548076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:42.548083 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:16:42.548090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-19 05:16:42.548130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:16:42.548143 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:42.548150 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:16:42.548156 | orchestrator | 2026-02-19 05:16:42.548163 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-19 05:16:42.548173 | orchestrator | Thursday 19 February 2026 05:16:35 +0000 (0:00:01.241) 0:00:49.742 ***** 2026-02-19 05:16:42.548183 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-19 05:16:42.548258 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-19 05:16:42.548273 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-19 05:16:42.548284 | orchestrator | 2026-02-19 05:16:42.548295 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-19 05:16:42.548305 | orchestrator | Thursday 19 February 2026 05:16:36 +0000 (0:00:01.446) 0:00:51.189 ***** 2026-02-19 05:16:42.548315 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-19 05:16:42.548323 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-19 05:16:42.548329 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 
2026-02-19 05:16:42.548335 | orchestrator | 2026-02-19 05:16:42.548341 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-19 05:16:42.548347 | orchestrator | Thursday 19 February 2026 05:16:38 +0000 (0:00:01.466) 0:00:52.656 ***** 2026-02-19 05:16:42.548355 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-19 05:16:42.548368 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-19 05:16:42.548383 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-19 05:16:42.548392 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:16:42.548401 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-19 05:16:42.548409 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-19 05:16:42.548419 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:16:42.548428 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-19 05:16:42.548436 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:16:42.548454 | orchestrator | 2026-02-19 05:16:42.548462 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-19 05:16:42.548470 | orchestrator | Thursday 19 February 2026 05:16:39 +0000 (0:00:01.568) 0:00:54.225 ***** 2026-02-19 05:16:42.548482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:42.548491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:42.548501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-19 05:16:42.548522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:43.822660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:43.822765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:16:43.822822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:16:43.822837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:16:43.822849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:16:43.822861 | orchestrator | 2026-02-19 05:16:43.822880 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-19 05:16:43.822893 | orchestrator | Thursday 19 February 2026 05:16:42 +0000 (0:00:02.848) 0:00:57.073 ***** 2026-02-19 05:16:43.822905 | orchestrator | changed: [testbed-node-0] => { 2026-02-19 05:16:43.822917 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:16:43.822929 | orchestrator | } 2026-02-19 
05:16:43.822939 | orchestrator | changed: [testbed-node-1] => { 2026-02-19 05:16:43.822950 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:16:43.822961 | orchestrator | } 2026-02-19 05:16:43.822972 | orchestrator | changed: [testbed-node-2] => { 2026-02-19 05:16:43.822983 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:16:43.822993 | orchestrator | } 2026-02-19 05:16:43.823004 | orchestrator | 2026-02-19 05:16:43.823015 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-19 05:16:43.823026 | orchestrator | Thursday 19 February 2026 05:16:42 +0000 (0:00:00.318) 0:00:57.392 ***** 2026-02-19 05:16:43.823054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-19 05:16:43.823067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:16:43.823087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:43.823099 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:16:43.823111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-19 05:16:43.823122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:16:43.823139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:43.823151 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:16:43.823163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-19 05:16:43.823185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:16:48.253377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:16:48.253510 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:16:48.253539 | orchestrator | 2026-02-19 05:16:48.253561 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-19 05:16:48.253581 | orchestrator | Thursday 19 February 2026 05:16:43 +0000 (0:00:00.945) 0:00:58.338 ***** 2026-02-19 05:16:48.253602 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:16:48.253620 | orchestrator | 2026-02-19 05:16:48.253641 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-19 05:16:48.253653 | orchestrator | Thursday 19 February 2026 05:16:44 +0000 (0:00:01.021) 0:00:59.359 ***** 2026-02-19 05:16:48.253668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:16:48.253704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 05:16:48.253718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 05:16:48.253731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 05:16:48.253788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:16:48.253803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 05:16:48.253816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 05:16:48.253836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 05:16:48.253849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:16:48.253879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 05:16:48.957004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 05:16:48.957115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 05:16:48.957130 | orchestrator | 2026-02-19 05:16:48.957144 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-19 05:16:48.957156 | orchestrator | Thursday 19 February 2026 05:16:48 +0000 (0:00:03.523) 0:01:02.882 ***** 2026-02-19 05:16:48.957168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:16:48.957199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 05:16:48.957211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 05:16:48.957321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 05:16:48.957335 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:16:48.957347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:16:48.957358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 05:16:48.957369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 05:16:48.957384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 05:16:48.957401 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:16:48.957411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:16:48.957430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-19 05:16:58.028744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-19 05:16:58.028893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-19 05:16:58.028915 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:16:58.028930 | orchestrator | 2026-02-19 05:16:58.028945 | orchestrator | TASK [haproxy-config : 
Configuring firewall for aodh] ************************** 2026-02-19 05:16:58.028963 | orchestrator | Thursday 19 February 2026 05:16:49 +0000 (0:00:00.689) 0:01:03.572 ***** 2026-02-19 05:16:58.028979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:16:58.028997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:16:58.029031 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:16:58.029041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:16:58.029072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:16:58.029080 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:16:58.029088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:16:58.029097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:16:58.029104 | orchestrator | 
skipping: [testbed-node-2] 2026-02-19 05:16:58.029112 | orchestrator | 2026-02-19 05:16:58.029121 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-19 05:16:58.029129 | orchestrator | Thursday 19 February 2026 05:16:50 +0000 (0:00:01.415) 0:01:04.987 ***** 2026-02-19 05:16:58.029137 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:16:58.029146 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:16:58.029155 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:16:58.029163 | orchestrator | 2026-02-19 05:16:58.029171 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-19 05:16:58.029179 | orchestrator | Thursday 19 February 2026 05:16:51 +0000 (0:00:01.181) 0:01:06.169 ***** 2026-02-19 05:16:58.029187 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:16:58.029194 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:16:58.029202 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:16:58.029210 | orchestrator | 2026-02-19 05:16:58.029307 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-19 05:16:58.029325 | orchestrator | Thursday 19 February 2026 05:16:53 +0000 (0:00:02.044) 0:01:08.213 ***** 2026-02-19 05:16:58.029340 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:16:58.029355 | orchestrator | 2026-02-19 05:16:58.029369 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-19 05:16:58.029382 | orchestrator | Thursday 19 February 2026 05:16:54 +0000 (0:00:00.833) 0:01:09.046 ***** 2026-02-19 05:16:58.029445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:16:58.029469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 05:16:58.029526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:16:58.029543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:16:58.029557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}})  2026-02-19 05:16:58.029581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:16:58.668702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:16:58.668839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 05:16:58.668858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:16:58.668872 | orchestrator | 2026-02-19 05:16:58.668886 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-19 05:16:58.668899 | orchestrator | Thursday 19 February 2026 05:16:58 +0000 (0:00:03.504) 0:01:12.551 ***** 2026-02-19 05:16:58.668914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:16:58.668947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 05:16:58.668993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:16:58.669020 | orchestrator | skipping: [testbed-node-0] 
2026-02-19 05:16:58.669042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:16:58.669055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 05:16:58.669069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:16:58.669079 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:16:58.669096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:17:08.411117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-19 05:17:08.411278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:17:08.411301 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:17:08.411317 | orchestrator | 2026-02-19 05:17:08.411331 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-19 05:17:08.411343 | orchestrator | Thursday 19 February 2026 05:16:58 +0000 (0:00:00.638) 0:01:13.189 ***** 2026-02-19 05:17:08.411351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:17:08.411362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-02-19 05:17:08.411371 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:17:08.411378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:17:08.411386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:17:08.411393 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:17:08.411400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:17:08.411408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:17:08.411415 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:17:08.411422 | orchestrator | 2026-02-19 05:17:08.411430 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-19 05:17:08.411438 | orchestrator | Thursday 19 February 2026 05:16:59 +0000 (0:00:01.049) 0:01:14.239 ***** 2026-02-19 05:17:08.411445 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:17:08.411453 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:17:08.411460 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:17:08.411489 | orchestrator | 2026-02-19 05:17:08.411496 | orchestrator | TASK [proxysql-config : 
Copying over barbican ProxySQL rules config] *********** 2026-02-19 05:17:08.411503 | orchestrator | Thursday 19 February 2026 05:17:00 +0000 (0:00:01.276) 0:01:15.516 ***** 2026-02-19 05:17:08.411511 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:17:08.411518 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:17:08.411525 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:17:08.411532 | orchestrator | 2026-02-19 05:17:08.411539 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-19 05:17:08.411546 | orchestrator | Thursday 19 February 2026 05:17:03 +0000 (0:00:02.034) 0:01:17.551 ***** 2026-02-19 05:17:08.411553 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:17:08.411561 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:17:08.411568 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:17:08.411575 | orchestrator | 2026-02-19 05:17:08.411583 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-19 05:17:08.411605 | orchestrator | Thursday 19 February 2026 05:17:03 +0000 (0:00:00.323) 0:01:17.875 ***** 2026-02-19 05:17:08.411612 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:17:08.411620 | orchestrator | 2026-02-19 05:17:08.411627 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-19 05:17:08.411634 | orchestrator | Thursday 19 February 2026 05:17:04 +0000 (0:00:00.872) 0:01:18.747 ***** 2026-02-19 05:17:08.411657 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-19 05:17:08.411671 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-19 05:17:08.411681 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-19 05:17:08.411695 | orchestrator | 2026-02-19 05:17:08.411704 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-19 05:17:08.411713 | orchestrator | Thursday 19 February 2026 05:17:06 +0000 (0:00:02.596) 0:01:21.343 ***** 2026-02-19 05:17:08.411722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-19 05:17:08.411731 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:17:08.411746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-19 05:17:16.555161 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:17:16.555349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-19 05:17:16.555386 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:17:16.555399 | orchestrator | 2026-02-19 05:17:16.555412 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-19 05:17:16.555436 | orchestrator | Thursday 19 February 2026 05:17:08 +0000 (0:00:01.592) 0:01:22.935 ***** 2026-02-19 05:17:16.555449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-19 
05:17:16.555463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-19 05:17:16.555496 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:17:16.555509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-19 05:17:16.555520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-19 05:17:16.555531 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:17:16.555543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-19 05:17:16.555554 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-19 05:17:16.555565 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:17:16.555576 | orchestrator | 2026-02-19 05:17:16.555588 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-19 05:17:16.555599 | orchestrator | Thursday 19 February 2026 05:17:10 +0000 (0:00:02.049) 0:01:24.984 ***** 2026-02-19 05:17:16.555610 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:17:16.555621 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:17:16.555632 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:17:16.555643 | orchestrator | 2026-02-19 05:17:16.555654 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-19 05:17:16.555682 | orchestrator | Thursday 19 February 2026 05:17:10 +0000 (0:00:00.437) 0:01:25.422 ***** 2026-02-19 05:17:16.555694 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:17:16.555705 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:17:16.555716 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:17:16.555727 | orchestrator | 2026-02-19 05:17:16.555738 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-19 05:17:16.555749 | orchestrator | Thursday 19 February 2026 05:17:12 +0000 (0:00:01.275) 0:01:26.698 ***** 2026-02-19 05:17:16.555760 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:17:16.555771 | orchestrator | 2026-02-19 05:17:16.555782 | orchestrator | TASK 
[haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-19 05:17:16.555793 | orchestrator | Thursday 19 February 2026 05:17:13 +0000 (0:00:00.936) 0:01:27.635 ***** 2026-02-19 05:17:16.555813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:17:16.555836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:17:16.555850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:17:16.555863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:17:16.555898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 05:17:17.270193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-19 05:17:17.270407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 05:17:17.270427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 05:17:17.270440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 05:17:17.270452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 05:17:17.270491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 05:17:17.270512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 05:17:17.270524 | orchestrator |
2026-02-19 05:17:17.270538 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-02-19 05:17:17.270550 | orchestrator | Thursday 19 February 2026 05:17:16 +0000 (0:00:03.553) 0:01:31.188 *****
2026-02-19 05:17:17.270563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-19 05:17:17.270576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 05:17:17.270588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 05:17:17.270612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 05:17:18.562072 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:17:18.562188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-19 05:17:18.562220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 05:17:18.562296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 05:17:18.562318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 05:17:18.562338 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:17:18.562403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-19 05:17:18.562493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-19 05:17:18.562513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-19 05:17:18.562534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-19 05:17:18.562554 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:17:18.562573 | orchestrator |
2026-02-19 05:17:18.562595 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-02-19 05:17:18.562614 | orchestrator | Thursday 19 February 2026 05:17:17 +0000 (0:00:00.714) 0:01:31.903 *****
2026-02-19 05:17:18.562635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-19 05:17:18.562659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-19 05:17:18.562681 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:17:18.562701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-19 05:17:18.562736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-19 05:17:18.562756 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:17:18.562778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-19 05:17:18.562808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-19 05:17:27.141423 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:17:27.141541 | orchestrator |
2026-02-19 05:17:27.141561 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-02-19 05:17:27.141575 | orchestrator | Thursday 19 February 2026 05:17:18 +0000 (0:00:01.179) 0:01:33.082 *****
2026-02-19 05:17:27.141586 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:17:27.141600 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:17:27.141611 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:17:27.141623 | orchestrator |
2026-02-19 05:17:27.141636 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-02-19 05:17:27.141648 | orchestrator | Thursday 19 February 2026 05:17:19 +0000 (0:00:01.226) 0:01:34.308 *****
2026-02-19 05:17:27.141661 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:17:27.141674 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:17:27.141686 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:17:27.141698 | orchestrator |
2026-02-19 05:17:27.141709 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-02-19 05:17:27.141721 | orchestrator | Thursday 19 February 2026 05:17:21 +0000 (0:00:02.047) 0:01:36.356 *****
2026-02-19 05:17:27.141733 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:17:27.141746 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:17:27.141758 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:17:27.141771 | orchestrator |
2026-02-19 05:17:27.141784 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-02-19 05:17:27.141797 | orchestrator | Thursday 19 February 2026 05:17:22 +0000 (0:00:00.509) 0:01:36.865 *****
2026-02-19 05:17:27.141810 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:17:27.141823 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:17:27.141836 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:17:27.141849 | orchestrator |
2026-02-19 05:17:27.141860 | orchestrator | TASK [include_role : designate] ************************************************
2026-02-19 05:17:27.141873 | orchestrator | Thursday 19 February 2026 05:17:22 +0000 (0:00:00.334) 0:01:37.200 *****
2026-02-19 05:17:27.141886 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 05:17:27.141900 | orchestrator |
2026-02-19 05:17:27.141912 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-02-19 05:17:27.141926 | orchestrator | Thursday 19 February 2026 05:17:23 +0000 (0:00:00.797) 0:01:37.997 *****
2026-02-19 05:17:27.141947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-19 05:17:27.141999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 05:17:27.142122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-19 05:17:27.142154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 05:17:27.142168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 05:17:27.142181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 05:17:27.142204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 05:17:27.142217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 05:17:27.142300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 05:17:27.142331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.059525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.059619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.059654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-19 05:17:28.059668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 05:17:28.059691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.059715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.059725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.059735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.059750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.059759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.059773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.059783 | orchestrator |
2026-02-19 05:17:28.059794 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-02-19 05:17:28.059804 | orchestrator | Thursday 19 February 2026 05:17:27 +0000 (0:00:03.988) 0:01:41.986 *****
2026-02-19 05:17:28.059819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-19 05:17:28.330656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-19 05:17:28.330776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.330790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.330802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.330813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-19 05:17:28.330823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-19 05:17:28.330835 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:17:28.331677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:17:28.331726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-19 05:17:28.331737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-19 05:17:28.331747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-19 05:17:28.331757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-19 05:17:28.331768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:17:28.331795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-19 05:17:38.736688 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:17:38.736801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:17:38.736824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-19 05:17:38.736837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-19 05:17:38.736849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-19 05:17:38.736861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-19 05:17:38.736922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:17:38.736937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-19 05:17:38.736949 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:17:38.736961 | orchestrator | 2026-02-19 05:17:38.736974 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-19 05:17:38.736986 | orchestrator | Thursday 19 February 2026 05:17:28 +0000 (0:00:00.870) 0:01:42.856 ***** 2026-02-19 05:17:38.736999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:17:38.737014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:17:38.737026 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:17:38.737037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:17:38.737049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:17:38.737060 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:17:38.737071 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:17:38.737082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:17:38.737093 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:17:38.737105 | orchestrator | 2026-02-19 05:17:38.737117 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-19 05:17:38.737128 | orchestrator | Thursday 19 February 2026 05:17:29 +0000 (0:00:01.204) 0:01:44.060 ***** 2026-02-19 05:17:38.737139 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:17:38.737151 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:17:38.737162 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:17:38.737181 | orchestrator | 2026-02-19 05:17:38.737192 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-19 05:17:38.737205 | orchestrator | Thursday 19 February 2026 05:17:30 +0000 (0:00:01.285) 0:01:45.346 ***** 2026-02-19 05:17:38.737218 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:17:38.737230 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:17:38.737271 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:17:38.737284 | orchestrator | 2026-02-19 05:17:38.737301 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-19 05:17:38.737321 | orchestrator | Thursday 19 February 2026 05:17:32 +0000 (0:00:02.095) 0:01:47.442 ***** 2026-02-19 05:17:38.737342 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:17:38.737371 | orchestrator | skipping: [testbed-node-1] 2026-02-19 
05:17:38.737393 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:17:38.737412 | orchestrator | 2026-02-19 05:17:38.737431 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-19 05:17:38.737452 | orchestrator | Thursday 19 February 2026 05:17:33 +0000 (0:00:00.324) 0:01:47.767 ***** 2026-02-19 05:17:38.737468 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:17:38.737485 | orchestrator | 2026-02-19 05:17:38.737506 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-19 05:17:38.737525 | orchestrator | Thursday 19 February 2026 05:17:34 +0000 (0:00:00.982) 0:01:48.749 ***** 2026-02-19 05:17:38.737573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 05:17:38.861939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-19 05:17:38.862112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 05:17:38.862141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-19 05:17:38.862159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-19 05:17:38.862173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-19 05:17:42.338999 | orchestrator | 2026-02-19 05:17:42.339121 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-19 05:17:42.339139 | orchestrator | Thursday 19 February 2026 05:17:38 +0000 (0:00:04.641) 0:01:53.390 ***** 2026-02-19 05:17:42.339187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-19 05:17:42.339924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-19 05:17:42.339995 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:17:42.340065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-19 05:17:42.340091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-19 05:17:42.340125 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:17:42.340162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}}}})  2026-02-19 05:17:53.309579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}}}})  2026-02-19 05:17:53.309743 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:17:53.309764 | orchestrator | 2026-02-19 05:17:53.309779 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-19 05:17:53.309796 | orchestrator | Thursday 19 February 2026 05:17:42 +0000 (0:00:03.566) 0:01:56.956 ***** 2026-02-19 05:17:53.309812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-19 05:17:53.309828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-19 05:17:53.309843 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:17:53.309858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-19 05:17:53.309903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-19 05:17:53.309919 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:17:53.309934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-19 05:17:53.309949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-19 05:17:53.309964 | orchestrator | 
skipping: [testbed-node-2] 2026-02-19 05:17:53.309979 | orchestrator | 2026-02-19 05:17:53.309994 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-19 05:17:53.310084 | orchestrator | Thursday 19 February 2026 05:17:45 +0000 (0:00:03.294) 0:02:00.251 ***** 2026-02-19 05:17:53.310118 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:17:53.310135 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:17:53.310150 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:17:53.310166 | orchestrator | 2026-02-19 05:17:53.310182 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-19 05:17:53.310196 | orchestrator | Thursday 19 February 2026 05:17:46 +0000 (0:00:01.169) 0:02:01.420 ***** 2026-02-19 05:17:53.310210 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:17:53.310224 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:17:53.310239 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:17:53.310274 | orchestrator | 2026-02-19 05:17:53.310289 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-19 05:17:53.310304 | orchestrator | Thursday 19 February 2026 05:17:48 +0000 (0:00:01.880) 0:02:03.301 ***** 2026-02-19 05:17:53.310318 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:17:53.310333 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:17:53.310347 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:17:53.310362 | orchestrator | 2026-02-19 05:17:53.310377 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-19 05:17:53.310391 | orchestrator | Thursday 19 February 2026 05:17:49 +0000 (0:00:00.418) 0:02:03.719 ***** 2026-02-19 05:17:53.310406 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:17:53.310420 | orchestrator | 2026-02-19 05:17:53.310435 | orchestrator | TASK [haproxy-config : 
Copying over grafana haproxy config] ******************** 2026-02-19 05:17:53.310449 | orchestrator | Thursday 19 February 2026 05:17:49 +0000 (0:00:00.795) 0:02:04.514 ***** 2026-02-19 05:17:53.310466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:17:53.310500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:18:03.297166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:18:03.297362 | orchestrator | 2026-02-19 05:18:03.297384 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-19 05:18:03.297399 | orchestrator | Thursday 19 February 2026 05:17:53 +0000 (0:00:03.313) 0:02:07.828 ***** 2026-02-19 05:18:03.297419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:18:03.297437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:18:03.297498 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:03.297523 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:03.297542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:18:03.297560 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:03.297578 | orchestrator | 2026-02-19 05:18:03.297597 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-19 05:18:03.297615 | orchestrator | Thursday 19 February 2026 05:17:53 +0000 (0:00:00.635) 0:02:08.463 ***** 2026-02-19 05:18:03.297635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:03.297677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:03.297698 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:03.297743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:03.297758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:03.297784 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:03.297798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:03.297811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:03.297824 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:03.297836 | orchestrator | 2026-02-19 05:18:03.297849 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-19 05:18:03.297862 | orchestrator | Thursday 19 February 2026 05:17:54 +0000 
(0:00:00.676) 0:02:09.140 ***** 2026-02-19 05:18:03.297879 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:18:03.297904 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:18:03.297931 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:18:03.297949 | orchestrator | 2026-02-19 05:18:03.297969 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-19 05:18:03.297990 | orchestrator | Thursday 19 February 2026 05:17:55 +0000 (0:00:01.197) 0:02:10.338 ***** 2026-02-19 05:18:03.298008 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:18:03.298145 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:18:03.298157 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:18:03.298168 | orchestrator | 2026-02-19 05:18:03.298179 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-19 05:18:03.298190 | orchestrator | Thursday 19 February 2026 05:17:58 +0000 (0:00:02.369) 0:02:12.707 ***** 2026-02-19 05:18:03.298201 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:03.298212 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:03.298223 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:03.298234 | orchestrator | 2026-02-19 05:18:03.298245 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-19 05:18:03.298332 | orchestrator | Thursday 19 February 2026 05:17:58 +0000 (0:00:00.325) 0:02:13.032 ***** 2026-02-19 05:18:03.298358 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:18:03.298379 | orchestrator | 2026-02-19 05:18:03.298399 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-19 05:18:03.298420 | orchestrator | Thursday 19 February 2026 05:17:59 +0000 (0:00:00.918) 0:02:13.951 ***** 2026-02-19 05:18:03.298476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-19 05:18:03.972101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-19 05:18:03.972252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-19 05:18:03.972334 | orchestrator | 2026-02-19 05:18:03.972346 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-19 05:18:03.972357 | orchestrator | Thursday 19 February 2026 05:18:03 +0000 (0:00:03.870) 0:02:17.822 ***** 2026-02-19 05:18:03.972367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-19 05:18:03.972378 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:03.972468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-19 05:18:08.961784 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:08.961877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}})  2026-02-19 05:18:08.961911 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:08.961920 | orchestrator | 2026-02-19 05:18:08.961929 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-19 05:18:08.961937 | orchestrator | Thursday 19 February 2026 05:18:03 +0000 (0:00:00.676) 0:02:18.499 ***** 2026-02-19 05:18:08.961958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-19 05:18:08.961969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-19 05:18:08.961979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-19 05:18:08.961989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-19 05:18:08.961997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-19 05:18:08.962005 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:08.962079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-19 05:18:08.962089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-19 05:18:08.962097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-19 05:18:08.962104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-19 05:18:08.962112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-19 05:18:08.962127 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:08.962135 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-19 05:18:08.962142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-19 05:18:08.962150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-19 05:18:08.962216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-19 05:18:08.962233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-19 05:18:08.962242 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:08.962249 | orchestrator | 2026-02-19 05:18:08.962257 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-19 05:18:08.962294 | orchestrator | Thursday 19 February 2026 05:18:05 +0000 (0:00:01.174) 
0:02:19.674 ***** 2026-02-19 05:18:08.962302 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:18:08.962310 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:18:08.962317 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:18:08.962327 | orchestrator | 2026-02-19 05:18:08.962336 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-19 05:18:08.962344 | orchestrator | Thursday 19 February 2026 05:18:06 +0000 (0:00:01.222) 0:02:20.896 ***** 2026-02-19 05:18:08.962352 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:18:08.962361 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:18:08.962369 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:18:08.962378 | orchestrator | 2026-02-19 05:18:08.962386 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-19 05:18:08.962395 | orchestrator | Thursday 19 February 2026 05:18:08 +0000 (0:00:02.067) 0:02:22.963 ***** 2026-02-19 05:18:08.962404 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:08.962413 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:08.962421 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:08.962433 | orchestrator | 2026-02-19 05:18:08.962446 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-19 05:18:08.962459 | orchestrator | Thursday 19 February 2026 05:18:08 +0000 (0:00:00.335) 0:02:23.299 ***** 2026-02-19 05:18:08.962476 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:15.176191 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:15.176372 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:15.176396 | orchestrator | 2026-02-19 05:18:15.176413 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-19 05:18:15.176433 | orchestrator | Thursday 19 February 2026 05:18:09 +0000 (0:00:00.316) 0:02:23.615 ***** 2026-02-19 
05:18:15.176448 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:18:15.176491 | orchestrator | 2026-02-19 05:18:15.176506 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-19 05:18:15.176522 | orchestrator | Thursday 19 February 2026 05:18:10 +0000 (0:00:01.240) 0:02:24.856 ***** 2026-02-19 05:18:15.176544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-19 05:18:15.176566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 05:18:15.176600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 05:18:15.176617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}}}}) 2026-02-19 05:18:15.176656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 05:18:15.176684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 05:18:15.176700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-19 05:18:15.176723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 05:18:15.176740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 05:18:15.176756 | orchestrator | 2026-02-19 05:18:15.176774 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external 
frontend] *** 2026-02-19 05:18:15.176791 | orchestrator | Thursday 19 February 2026 05:18:13 +0000 (0:00:03.656) 0:02:28.513 ***** 2026-02-19 05:18:15.176818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-19 05:18:16.088640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 05:18:16.088732 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 05:18:16.088743 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:16.088769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-19 05:18:16.088778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 05:18:16.088785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 05:18:16.088811 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:16.088835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-19 05:18:16.088844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-19 05:18:16.088851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-19 05:18:16.088862 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:16.088870 | orchestrator | 2026-02-19 05:18:16.088878 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-19 05:18:16.088886 | orchestrator | Thursday 19 February 2026 05:18:15 +0000 
(0:00:01.187) 0:02:29.700 ***** 2026-02-19 05:18:16.088895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-19 05:18:16.088905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-19 05:18:16.088914 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:16.088926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-19 05:18:16.088933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-19 05:18:16.088940 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:16.088947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-19 05:18:16.088954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-19 05:18:16.088961 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:16.088968 | orchestrator | 2026-02-19 05:18:16.088975 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-19 05:18:16.088986 | orchestrator | Thursday 19 February 2026 05:18:16 +0000 (0:00:00.910) 0:02:30.610 ***** 2026-02-19 05:18:25.399326 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:18:25.399404 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:18:25.399409 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:18:25.399414 | orchestrator | 2026-02-19 05:18:25.399420 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-19 05:18:25.399425 | orchestrator | Thursday 19 February 2026 05:18:17 +0000 (0:00:01.210) 0:02:31.821 ***** 2026-02-19 05:18:25.399429 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:18:25.399433 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:18:25.399437 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:18:25.399441 | orchestrator | 2026-02-19 05:18:25.399445 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-19 05:18:25.399449 | orchestrator | Thursday 19 February 2026 05:18:19 +0000 (0:00:02.126) 0:02:33.947 ***** 2026-02-19 05:18:25.399453 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:25.399458 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:25.399462 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:25.399466 | orchestrator | 2026-02-19 05:18:25.399470 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-19 05:18:25.399473 | orchestrator | Thursday 19 February 2026 05:18:19 +0000 (0:00:00.520) 0:02:34.468 ***** 2026-02-19 05:18:25.399477 | orchestrator | included: magnum for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-19 05:18:25.399481 | orchestrator | 2026-02-19 05:18:25.399486 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-19 05:18:25.399490 | orchestrator | Thursday 19 February 2026 05:18:20 +0000 (0:00:01.004) 0:02:35.473 ***** 2026-02-19 05:18:25.399508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:18:25.399529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 05:18:25.399535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:18:25.399549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 05:18:25.399554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:18:25.399561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 05:18:25.399572 | orchestrator | 2026-02-19 
05:18:25.399576 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-19 05:18:25.399580 | orchestrator | Thursday 19 February 2026 05:18:24 +0000 (0:00:03.709) 0:02:39.182 ***** 2026-02-19 05:18:25.399584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:18:25.399591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 05:18:34.641126 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:34.641212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:18:34.641237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-19 05:18:34.641261 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:34.641269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:18:34.641317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})  2026-02-19 05:18:34.641323 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:34.641330 | orchestrator | 2026-02-19 05:18:34.641337 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-19 05:18:34.641344 | orchestrator | Thursday 19 February 2026 05:18:25 +0000 (0:00:00.744) 0:02:39.926 ***** 2026-02-19 05:18:34.641361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:34.641371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:34.641379 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:34.641385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:34.641391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:34.641397 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:34.641408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:34.641414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:34.641423 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:34.641433 | orchestrator | 2026-02-19 05:18:34.641441 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-19 05:18:34.641450 | orchestrator | Thursday 19 February 2026 05:18:26 +0000 (0:00:00.938) 0:02:40.865 ***** 2026-02-19 05:18:34.641458 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:18:34.641468 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:18:34.641477 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:18:34.641485 | orchestrator | 2026-02-19 05:18:34.641493 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-19 05:18:34.641507 | orchestrator | Thursday 19 February 2026 05:18:27 +0000 (0:00:01.501) 0:02:42.366 ***** 2026-02-19 05:18:34.641515 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:18:34.641523 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:18:34.641531 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:18:34.641541 | orchestrator | 2026-02-19 05:18:34.641551 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-19 05:18:34.641560 | orchestrator | Thursday 19 February 2026 05:18:29 +0000 (0:00:02.093) 0:02:44.460 ***** 2026-02-19 05:18:34.641570 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:18:34.641580 | orchestrator | 2026-02-19 05:18:34.641602 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-19 05:18:34.641612 | orchestrator | Thursday 19 February 2026 05:18:31 +0000 (0:00:01.101) 0:02:45.562 ***** 2026-02-19 05:18:34.641633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:18:34.641646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:18:34.641668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 05:18:35.328476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 05:18:35.328622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:18:35.328645 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:18:35.328656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:18:35.328666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:18:35.328787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 05:18:35.328805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 05:18:35.328821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 05:18:35.328831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 05:18:35.328840 | orchestrator | 2026-02-19 05:18:35.328851 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-19 05:18:35.328862 | orchestrator | Thursday 19 February 2026 05:18:34 +0000 (0:00:03.704) 0:02:49.266 ***** 2026-02-19 05:18:35.328873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': 
'30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:18:35.328897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:18:36.596736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 05:18:36.596875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 05:18:36.596900 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:36.596921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:18:36.596939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:18:36.596957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 05:18:36.597019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 05:18:36.597037 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:36.597052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:18:36.597074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:18:36.597092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-19 05:18:36.597109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-19 05:18:36.597133 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:36.597149 | orchestrator | 2026-02-19 05:18:36.597168 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-19 05:18:36.597185 | orchestrator | Thursday 19 February 2026 05:18:35 +0000 (0:00:00.681) 0:02:49.948 ***** 2026-02-19 05:18:36.597204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:36.597224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:36.597241 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:36.597258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-02-19 05:18:36.597313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:47.472591 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:47.472685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:47.472697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:18:47.472708 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:47.472716 | orchestrator | 2026-02-19 05:18:47.472724 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-19 05:18:47.472732 | orchestrator | Thursday 19 February 2026 05:18:36 +0000 (0:00:01.168) 0:02:51.116 ***** 2026-02-19 05:18:47.472738 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:18:47.472745 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:18:47.472751 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:18:47.472758 | orchestrator | 2026-02-19 05:18:47.472763 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-19 05:18:47.472783 | orchestrator | Thursday 19 February 2026 05:18:37 +0000 (0:00:01.281) 0:02:52.397 ***** 2026-02-19 05:18:47.472790 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:18:47.472796 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:18:47.472801 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:18:47.472808 | 
orchestrator | 2026-02-19 05:18:47.472814 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-19 05:18:47.472820 | orchestrator | Thursday 19 February 2026 05:18:40 +0000 (0:00:02.156) 0:02:54.554 ***** 2026-02-19 05:18:47.472826 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:18:47.472831 | orchestrator | 2026-02-19 05:18:47.472837 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-19 05:18:47.472844 | orchestrator | Thursday 19 February 2026 05:18:41 +0000 (0:00:01.445) 0:02:56.000 ***** 2026-02-19 05:18:47.472850 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:18:47.472856 | orchestrator | 2026-02-19 05:18:47.472862 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-19 05:18:47.472868 | orchestrator | Thursday 19 February 2026 05:18:45 +0000 (0:00:03.623) 0:02:59.624 ***** 2026-02-19 05:18:47.472898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:18:47.472922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-19 05:18:47.472928 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:47.472979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:18:47.472994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-19 05:18:47.472998 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:47.473008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:18:49.581974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-19 05:18:49.582176 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:49.582212 | orchestrator | 2026-02-19 05:18:49.582236 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-19 05:18:49.582256 | orchestrator | Thursday 19 February 2026 05:18:47 +0000 (0:00:02.368) 0:03:01.992 ***** 2026-02-19 05:18:49.582344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:18:49.582461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-19 05:18:49.582484 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:49.582548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:18:49.582584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-19 05:18:49.582605 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:49.582626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:18:49.582658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-19 05:18:58.678276 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:58.678465 | orchestrator | 2026-02-19 05:18:58.678486 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-19 05:18:58.678502 | orchestrator | Thursday 19 February 2026 05:18:49 +0000 (0:00:02.111) 0:03:04.104 ***** 2026-02-19 05:18:58.678537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}})  2026-02-19 05:18:58.678588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-19 05:18:58.678605 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:58.678616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-19 05:18:58.678625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  
2026-02-19 05:18:58.678634 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:58.678644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-19 05:18:58.678653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-19 05:18:58.678662 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:58.678671 | orchestrator | 2026-02-19 05:18:58.678680 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-19 05:18:58.678689 | orchestrator | Thursday 19 February 2026 05:18:52 +0000 (0:00:02.533) 0:03:06.637 ***** 2026-02-19 05:18:58.678698 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:18:58.678728 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:18:58.678754 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:18:58.678768 | orchestrator | 2026-02-19 05:18:58.678783 | orchestrator | TASK [proxysql-config : Copying over 
mariadb ProxySQL rules config] ************ 2026-02-19 05:18:58.678799 | orchestrator | Thursday 19 February 2026 05:18:53 +0000 (0:00:01.640) 0:03:08.278 ***** 2026-02-19 05:18:58.678815 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:58.678832 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:58.678848 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:58.678858 | orchestrator | 2026-02-19 05:18:58.678869 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-19 05:18:58.678884 | orchestrator | Thursday 19 February 2026 05:18:55 +0000 (0:00:01.483) 0:03:09.761 ***** 2026-02-19 05:18:58.678895 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:18:58.678905 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:18:58.678915 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:18:58.678924 | orchestrator | 2026-02-19 05:18:58.678935 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-19 05:18:58.678945 | orchestrator | Thursday 19 February 2026 05:18:55 +0000 (0:00:00.355) 0:03:10.116 ***** 2026-02-19 05:18:58.678955 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:18:58.678965 | orchestrator | 2026-02-19 05:18:58.678975 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-19 05:18:58.678985 | orchestrator | Thursday 19 February 2026 05:18:56 +0000 (0:00:01.373) 0:03:11.489 ***** 2026-02-19 05:18:58.679000 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-19 05:18:58.679020 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-19 05:18:58.679038 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-19 05:18:58.679053 | orchestrator 
| 2026-02-19 05:18:58.679068 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-19 05:18:58.679093 | orchestrator | Thursday 19 February 2026 05:18:58 +0000 (0:00:01.479) 0:03:12.969 ***** 2026-02-19 05:18:58.679117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-19 05:19:07.877401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-19 05:19:07.877497 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:07.877510 | orchestrator | skipping: 
[testbed-node-1] 2026-02-19 05:19:07.877519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-19 05:19:07.877527 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:07.877534 | orchestrator | 2026-02-19 05:19:07.877543 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-19 05:19:07.877551 | orchestrator | Thursday 19 February 2026 05:18:58 +0000 (0:00:00.436) 0:03:13.406 ***** 2026-02-19 05:19:07.877573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-19 05:19:07.877582 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:07.877599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-19 05:19:07.877606 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:07.877614 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-19 05:19:07.877621 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:07.877645 | orchestrator | 2026-02-19 05:19:07.877653 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-19 05:19:07.877660 | orchestrator | Thursday 19 February 2026 05:18:59 +0000 (0:00:00.923) 0:03:14.330 ***** 2026-02-19 05:19:07.877667 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:07.877674 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:07.877681 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:07.877689 | orchestrator | 2026-02-19 05:19:07.877696 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-19 05:19:07.877703 | orchestrator | Thursday 19 February 2026 05:19:00 +0000 (0:00:00.444) 0:03:14.774 ***** 2026-02-19 05:19:07.877710 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:07.877718 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:07.877725 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:07.877732 | orchestrator | 2026-02-19 05:19:07.877739 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-19 05:19:07.877746 | orchestrator | Thursday 19 February 2026 05:19:01 +0000 (0:00:01.664) 0:03:16.439 ***** 2026-02-19 05:19:07.877753 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:07.877760 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:07.877768 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:07.877775 | orchestrator | 2026-02-19 05:19:07.877782 | orchestrator | TASK [include_role : neutron] 
**************************************************
2026-02-19 05:19:07.877812 | orchestrator | Thursday 19 February 2026 05:19:02 +0000 (0:00:00.568) 0:03:17.008 *****
2026-02-19 05:19:07.877820 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-19 05:19:07.877827 | orchestrator |
2026-02-19 05:19:07.877835 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-02-19 05:19:07.877842 | orchestrator | Thursday 19 February 2026 05:19:03 +0000 (0:00:01.180) 0:03:18.188 *****
2026-02-19 05:19:07.877874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-02-19 05:19:07.877888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True,
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:07.877899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-19 05:19:07.877915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 
'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-19 05:19:07.877936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:08.117269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:08.117469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:08.117487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 05:19:08.117522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 05:19:08.117534 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:08.117552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-19 05:19:08.117611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:08.117634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:08.117654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-19 05:19:08.117719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:19:08.117732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-19 05:19:08.117757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:08.237215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-19 05:19:08.237461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-19 05:19:08.237485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:08.237500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:08.237530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:08.237563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 05:19:08.237576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 05:19:08.237598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:08.237612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:19:08.237633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-19 05:19:08.237686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:08.517538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:08.517675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-19 05:19:08.517694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:08.517708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-19 05:19:08.517764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-19 05:19:08.517780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:08.517800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-19 05:19:08.517813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:08.517825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:08.517837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 05:19:08.517854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 05:19:08.517875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:09.704582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-19 05:19:09.704703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:09.704722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:09.704738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-19 05:19:09.704768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-19 05:19:09.704800 | orchestrator | 2026-02-19 05:19:09.704813 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-19 05:19:09.704825 | orchestrator | Thursday 19 February 2026 05:19:08 +0000 (0:00:04.849) 0:03:23.038 ***** 2026-02-19 05:19:09.704857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:19:09.704870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:09.704881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-19 05:19:09.704897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-19 05:19:09.704923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:09.754841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:09.754920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:09.754931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 05:19:09.754940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 05:19:09.754962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:19:09.754999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:09.755008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-19 05:19:09.755014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:09.755021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:19:09.755031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:09.755043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-19 05:19:09.755053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:09.856161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:09.856271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-19 05:19:09.856448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-19 05:19:09.856498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-19 05:19:09.856534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-19 05:19:09.856548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:09.856561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-19 05:19:09.856580 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:09.856637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:09.856652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:09.856665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:09.856684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:10.038682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}}})  2026-02-19 05:19:10.038767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 05:19:10.038778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-19 05:19:10.038818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 05:19:10.038828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-19 05:19:10.038850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:10.038860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:10.038869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-19 05:19:10.038887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-19 05:19:10.038899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:10.038908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-19 05:19:10.038915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:10.038930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': 
False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-19 05:19:20.207813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-19 05:19:20.207932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-19 05:19:20.207944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-19 05:19:20.207952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-19 05:19:20.207959 | 
orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:20.207967 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:20.207974 | orchestrator | 2026-02-19 05:19:20.207981 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-19 05:19:20.207989 | orchestrator | Thursday 19 February 2026 05:19:10 +0000 (0:00:01.521) 0:03:24.559 ***** 2026-02-19 05:19:20.207996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:20.208006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:20.208026 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:20.208033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:20.208040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:20.208057 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:20.208064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:20.208070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:20.208076 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:20.208083 | orchestrator | 2026-02-19 05:19:20.208089 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-19 05:19:20.208095 | orchestrator | Thursday 19 February 2026 05:19:11 +0000 (0:00:01.500) 0:03:26.059 ***** 2026-02-19 05:19:20.208102 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:19:20.208109 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:19:20.208115 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:19:20.208121 | orchestrator | 2026-02-19 05:19:20.208127 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-19 05:19:20.208134 | orchestrator | Thursday 19 February 2026 05:19:13 +0000 (0:00:01.523) 0:03:27.582 ***** 2026-02-19 05:19:20.208140 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:19:20.208146 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:19:20.208153 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:19:20.208159 | orchestrator | 2026-02-19 05:19:20.208165 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-19 05:19:20.208175 | orchestrator | Thursday 19 February 2026 05:19:15 +0000 (0:00:02.180) 0:03:29.763 ***** 2026-02-19 05:19:20.208181 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:19:20.208188 | orchestrator | 2026-02-19 05:19:20.208194 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-19 05:19:20.208200 | orchestrator | Thursday 19 February 2026 05:19:16 +0000 (0:00:01.271) 0:03:31.034 ***** 2026-02-19 05:19:20.208207 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-19 05:19:20.208219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-19 05:19:32.146435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-19 05:19:32.146518 | orchestrator | 2026-02-19 05:19:32.146526 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-19 05:19:32.146531 | orchestrator | Thursday 19 February 2026 05:19:20 +0000 (0:00:03.695) 0:03:34.730 ***** 2026-02-19 05:19:32.146548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-19 05:19:32.146553 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:32.146559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-19 05:19:32.146564 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:32.146580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-19 05:19:32.146599 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:32.146604 | orchestrator | 2026-02-19 05:19:32.146608 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-19 05:19:32.146613 | orchestrator | Thursday 19 February 2026 05:19:20 +0000 (0:00:00.538) 0:03:35.268 ***** 2026-02-19 05:19:32.146619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-19 05:19:32.146626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-19 05:19:32.146632 | orchestrator | skipping: [testbed-node-0] 
2026-02-19 05:19:32.146636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-19 05:19:32.146643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-19 05:19:32.146648 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:32.146652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-19 05:19:32.146657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-19 05:19:32.146661 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:32.146665 | orchestrator | 2026-02-19 05:19:32.146669 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-19 05:19:32.146674 | orchestrator | Thursday 19 February 2026 05:19:21 +0000 (0:00:01.030) 0:03:36.298 ***** 2026-02-19 05:19:32.146678 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:19:32.146683 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:19:32.146687 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:19:32.146691 | orchestrator | 2026-02-19 05:19:32.146696 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] 
********** 2026-02-19 05:19:32.146700 | orchestrator | Thursday 19 February 2026 05:19:23 +0000 (0:00:01.319) 0:03:37.618 ***** 2026-02-19 05:19:32.146704 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:19:32.146708 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:19:32.146713 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:19:32.146721 | orchestrator | 2026-02-19 05:19:32.146725 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-19 05:19:32.146729 | orchestrator | Thursday 19 February 2026 05:19:25 +0000 (0:00:02.063) 0:03:39.681 ***** 2026-02-19 05:19:32.146733 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:19:32.146738 | orchestrator | 2026-02-19 05:19:32.146743 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-19 05:19:32.146747 | orchestrator | Thursday 19 February 2026 05:19:26 +0000 (0:00:01.524) 0:03:41.206 ***** 2026-02-19 05:19:32.146755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:19:32.283878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:19:32.283977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:19:32.283991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:19:32.284034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:19:32.284047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:19:32.284059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:19:32.284072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 
'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 05:19:32.284082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 05:19:32.284098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:19:32.284115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:19:32.992798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 05:19:32.992932 | orchestrator | 2026-02-19 05:19:32.992962 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-19 05:19:32.992983 | orchestrator | Thursday 19 February 2026 05:19:32 +0000 (0:00:05.606) 0:03:46.812 ***** 2026-02-19 05:19:32.993029 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:19:32.993070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:19:32.993085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:19:32.993119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 05:19:32.993132 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:32.993152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:19:32.993165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:19:32.993185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:19:32.993197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 05:19:32.993209 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:32.993229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:19:45.508706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:19:45.508851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-19 05:19:45.508871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-19 05:19:45.508885 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:45.508899 | orchestrator | 2026-02-19 05:19:45.508911 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-19 05:19:45.508923 | orchestrator | Thursday 19 February 2026 05:19:33 +0000 (0:00:00.852) 0:03:47.665 ***** 2026-02-19 05:19:45.508935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:45.508948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:45.508961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 
05:19:45.508974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:45.508985 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:45.508996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:45.509026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:45.509039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:45.509065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:45.509077 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:45.509088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:45.509099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:45.509110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:45.509121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:19:45.509132 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:45.509143 | orchestrator | 2026-02-19 05:19:45.509156 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-19 05:19:45.509168 | orchestrator | Thursday 19 February 2026 05:19:34 +0000 (0:00:01.602) 0:03:49.267 ***** 2026-02-19 05:19:45.509181 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:19:45.509194 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:19:45.509206 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:19:45.509218 | orchestrator | 2026-02-19 05:19:45.509231 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-19 05:19:45.509243 | orchestrator | Thursday 19 February 2026 05:19:36 +0000 (0:00:01.363) 0:03:50.631 ***** 2026-02-19 05:19:45.509254 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:19:45.509265 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:19:45.509275 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:19:45.509286 | orchestrator | 2026-02-19 05:19:45.509297 | orchestrator | TASK [include_role : nova-cell] 
************************************************ 2026-02-19 05:19:45.509307 | orchestrator | Thursday 19 February 2026 05:19:38 +0000 (0:00:02.174) 0:03:52.805 ***** 2026-02-19 05:19:45.509344 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:19:45.509355 | orchestrator | 2026-02-19 05:19:45.509366 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-19 05:19:45.509377 | orchestrator | Thursday 19 February 2026 05:19:40 +0000 (0:00:01.950) 0:03:54.756 ***** 2026-02-19 05:19:45.509387 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-19 05:19:45.509399 | orchestrator | 2026-02-19 05:19:45.509410 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-19 05:19:45.509421 | orchestrator | Thursday 19 February 2026 05:19:41 +0000 (0:00:00.936) 0:03:55.692 ***** 2026-02-19 05:19:45.509432 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-19 05:19:45.509445 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-19 05:19:45.509474 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-19 05:19:57.098692 | orchestrator | 2026-02-19 05:19:57.098808 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-19 05:19:57.098844 | orchestrator | Thursday 19 February 2026 05:19:45 +0000 (0:00:04.312) 0:04:00.005 ***** 2026-02-19 05:19:57.098861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 05:19:57.098876 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:57.098889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 05:19:57.098901 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:57.098913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 05:19:57.098925 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:57.098937 | orchestrator | 2026-02-19 05:19:57.098948 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-19 05:19:57.098959 | orchestrator | Thursday 19 February 2026 05:19:46 +0000 (0:00:01.512) 0:04:01.517 ***** 2026-02-19 05:19:57.098973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-19 05:19:57.098987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-19 05:19:57.099000 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:57.099011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
2026-02-19 05:19:57.099051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-19 05:19:57.099063 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:57.099074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-19 05:19:57.099085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-19 05:19:57.099096 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:57.099107 | orchestrator | 2026-02-19 05:19:57.099118 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-19 05:19:57.099129 | orchestrator | Thursday 19 February 2026 05:19:48 +0000 (0:00:01.681) 0:04:03.199 ***** 2026-02-19 05:19:57.099140 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:19:57.099152 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:19:57.099162 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:19:57.099173 | orchestrator | 2026-02-19 05:19:57.099184 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-19 05:19:57.099195 | orchestrator | Thursday 19 February 2026 05:19:51 +0000 (0:00:02.610) 0:04:05.810 ***** 2026-02-19 05:19:57.099205 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:19:57.099216 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:19:57.099244 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:19:57.099256 
| orchestrator | 2026-02-19 05:19:57.099267 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-19 05:19:57.099284 | orchestrator | Thursday 19 February 2026 05:19:53 +0000 (0:00:02.482) 0:04:08.292 ***** 2026-02-19 05:19:57.099296 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-19 05:19:57.099309 | orchestrator | 2026-02-19 05:19:57.099369 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-19 05:19:57.099381 | orchestrator | Thursday 19 February 2026 05:19:54 +0000 (0:00:00.918) 0:04:09.210 ***** 2026-02-19 05:19:57.099394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 05:19:57.099407 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:57.099419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 05:19:57.099430 | orchestrator | 
skipping: [testbed-node-1] 2026-02-19 05:19:57.099442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 05:19:57.099461 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:19:57.099472 | orchestrator | 2026-02-19 05:19:57.099483 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-19 05:19:57.099495 | orchestrator | Thursday 19 February 2026 05:19:55 +0000 (0:00:01.188) 0:04:10.399 ***** 2026-02-19 05:19:57.099506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 05:19:57.099517 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:19:57.099529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 05:19:57.099540 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:19:57.099559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-19 05:20:21.545176 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:21.545292 | orchestrator | 2026-02-19 05:20:21.545309 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-19 05:20:21.545390 | orchestrator | Thursday 19 February 2026 05:19:57 +0000 (0:00:01.217) 0:04:11.616 ***** 2026-02-19 05:20:21.545405 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:20:21.545417 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:21.545427 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:21.545438 | orchestrator | 2026-02-19 05:20:21.545450 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-19 05:20:21.545461 | orchestrator | Thursday 19 February 2026 05:19:58 +0000 (0:00:01.409) 0:04:13.026 ***** 2026-02-19 05:20:21.545472 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:20:21.545483 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:20:21.545494 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:20:21.545505 | orchestrator | 2026-02-19 05:20:21.545516 | orchestrator | TASK 
[proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-19 05:20:21.545526 | orchestrator | Thursday 19 February 2026 05:20:00 +0000 (0:00:02.350) 0:04:15.376 ***** 2026-02-19 05:20:21.545537 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:20:21.545548 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:20:21.545558 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:20:21.545569 | orchestrator | 2026-02-19 05:20:21.545580 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-19 05:20:21.545618 | orchestrator | Thursday 19 February 2026 05:20:04 +0000 (0:00:03.346) 0:04:18.723 ***** 2026-02-19 05:20:21.545630 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-19 05:20:21.545642 | orchestrator | 2026-02-19 05:20:21.545653 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-19 05:20:21.545664 | orchestrator | Thursday 19 February 2026 05:20:05 +0000 (0:00:01.460) 0:04:20.183 ***** 2026-02-19 05:20:21.545676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-19 05:20:21.545691 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:20:21.545703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-19 05:20:21.545714 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:21.545726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-19 05:20:21.545737 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:21.545748 | orchestrator | 2026-02-19 05:20:21.545759 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-19 05:20:21.545770 | orchestrator | Thursday 19 February 2026 05:20:07 +0000 (0:00:01.454) 0:04:21.638 ***** 2026-02-19 05:20:21.545782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-19 05:20:21.545792 | orchestrator | 
skipping: [testbed-node-0] 2026-02-19 05:20:21.545833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-19 05:20:21.545847 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:21.545859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-19 05:20:21.545878 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:21.545889 | orchestrator | 2026-02-19 05:20:21.545900 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-19 05:20:21.545911 | orchestrator | Thursday 19 February 2026 05:20:08 +0000 (0:00:01.425) 0:04:23.063 ***** 2026-02-19 05:20:21.545922 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:20:21.545933 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:21.545944 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:21.545954 | orchestrator | 2026-02-19 05:20:21.545965 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-19 05:20:21.545976 | 
orchestrator | Thursday 19 February 2026 05:20:10 +0000 (0:00:01.998) 0:04:25.061 ***** 2026-02-19 05:20:21.545986 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:20:21.545997 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:20:21.546008 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:20:21.546072 | orchestrator | 2026-02-19 05:20:21.546083 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-19 05:20:21.546094 | orchestrator | Thursday 19 February 2026 05:20:12 +0000 (0:00:02.428) 0:04:27.490 ***** 2026-02-19 05:20:21.546105 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:20:21.546116 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:20:21.546126 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:20:21.546137 | orchestrator | 2026-02-19 05:20:21.546148 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-19 05:20:21.546158 | orchestrator | Thursday 19 February 2026 05:20:16 +0000 (0:00:03.384) 0:04:30.875 ***** 2026-02-19 05:20:21.546169 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:20:21.546180 | orchestrator | 2026-02-19 05:20:21.546190 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-19 05:20:21.546201 | orchestrator | Thursday 19 February 2026 05:20:17 +0000 (0:00:01.629) 0:04:32.504 ***** 2026-02-19 05:20:21.546214 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 05:20:21.546227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 05:20:21.546247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 05:20:21.673070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 05:20:21.673162 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 05:20:21.673175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:20:21.673185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 05:20:21.673194 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-19 05:20:21.673244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 05:20:21.673254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 05:20:21.673263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 05:20:21.673271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:20:21.673280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 05:20:21.673292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 05:20:21.673315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:20:21.673376 | orchestrator | 2026-02-19 05:20:21.673401 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-19 05:20:22.305520 | orchestrator | Thursday 19 February 2026 05:20:21 +0000 (0:00:03.694) 0:04:36.199 ***** 2026-02-19 05:20:22.305716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 05:20:22.305748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 05:20:22.305762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 05:20:22.305775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 05:20:22.305807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:20:22.305819 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:20:22.305859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 05:20:22.305874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 05:20:22.305886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 05:20:22.305898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 05:20:22.305909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:20:22.305927 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:22.305939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-19 05:20:22.305965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-19 05:20:34.927373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-19 05:20:34.927483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-19 05:20:34.927501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-19 05:20:34.927514 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:34.927540 | orchestrator | 
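The looped items above show how each service entry carries a `haproxy` sub-dict whose listeners are either rendered (`ok`/`changed`) or skipped. The following is a minimal sketch of that selection logic, not the actual kolla-ansible role code: `bool_filter` and `select_listeners` are hypothetical helpers approximating how flags like `'enabled': 'yes'` and `'external': True` would decide the outcome, including the separate "single external frontend" task that is skipped in this run.

```python
# Minimal sketch (assumption: a simplified stand-in for the kolla-ansible
# haproxy-config role logic, NOT the actual role code). It mirrors how the
# 'haproxy' dict dumped in the loop items above selects which listeners
# get a config section rendered.

def bool_filter(value):
    """Approximate Ansible's `bool` filter for 'yes'/True style flags."""
    return str(value).lower() in ("1", "true", "yes", "on")

def select_listeners(service, single_external_frontend=False):
    """Return listener names that would be rendered for one service dict."""
    selected = []
    for name, listener in service.get("haproxy", {}).items():
        if not bool_filter(listener.get("enabled", False)):
            # e.g. the nova_serialconsole_proxy items above: enabled False
            continue
        if listener.get("external") and single_external_frontend:
            # External listeners are handled by the separate "single
            # external frontend" task instead (skipped in this run).
            continue
        selected.append(name)
    return selected

# Values taken from the octavia-api item printed in the log above.
octavia_api = {
    "haproxy": {
        "octavia_api": {
            "enabled": "yes", "mode": "http", "external": False,
            "port": "9876", "listen_port": "9876",
        },
        "octavia_api_external": {
            "enabled": "yes", "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9876", "listen_port": "9876",
        },
    }
}

print(select_listeners(octavia_api))
print(select_listeners(octavia_api, single_external_frontend=True))
```

With the default configuration both listeners are rendered; enabling the single-external-frontend mode would route the external listener through the other task, matching the skips logged above.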
2026-02-19 05:20:34.927579 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-19 05:20:34.927592 | orchestrator | Thursday 19 February 2026 05:20:22 +0000 (0:00:00.773) 0:04:36.972 ***** 2026-02-19 05:20:34.927604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-19 05:20:34.927618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-19 05:20:34.927631 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:20:34.927642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-19 05:20:34.927653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-19 05:20:34.927664 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:34.927676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-19 05:20:34.927697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-19 05:20:34.927708 | orchestrator | skipping: [testbed-node-2] 2026-02-19 
05:20:34.927719 | orchestrator | 2026-02-19 05:20:34.927743 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-19 05:20:34.927755 | orchestrator | Thursday 19 February 2026 05:20:23 +0000 (0:00:01.542) 0:04:38.515 ***** 2026-02-19 05:20:34.927766 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:20:34.927778 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:20:34.927788 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:20:34.927799 | orchestrator | 2026-02-19 05:20:34.927810 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-19 05:20:34.927822 | orchestrator | Thursday 19 February 2026 05:20:25 +0000 (0:00:01.215) 0:04:39.730 ***** 2026-02-19 05:20:34.927833 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:20:34.927843 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:20:34.927870 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:20:34.927882 | orchestrator | 2026-02-19 05:20:34.927893 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-19 05:20:34.927904 | orchestrator | Thursday 19 February 2026 05:20:27 +0000 (0:00:02.190) 0:04:41.921 ***** 2026-02-19 05:20:34.927915 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:20:34.927926 | orchestrator | 2026-02-19 05:20:34.927937 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-19 05:20:34.927948 | orchestrator | Thursday 19 February 2026 05:20:29 +0000 (0:00:01.664) 0:04:43.585 ***** 2026-02-19 05:20:34.927961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:20:34.927983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:20:34.927995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:20:34.928021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:20:35.971217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:20:35.971411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:20:35.971434 | orchestrator | 2026-02-19 05:20:35.971449 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-19 05:20:35.971461 | orchestrator | Thursday 19 February 2026 05:20:34 +0000 (0:00:05.859) 0:04:49.445 ***** 2026-02-19 05:20:35.971490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:20:35.971524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-19 05:20:35.971547 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:20:35.971561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:20:35.971574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-19 05:20:35.971586 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:35.971602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:20:35.971624 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-19 05:20:43.461980 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:43.462154 | orchestrator | 2026-02-19 05:20:43.462171 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-19 05:20:43.462183 | orchestrator | Thursday 19 February 2026 05:20:35 +0000 (0:00:01.041) 0:04:50.486 ***** 2026-02-19 05:20:43.462197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:20:43.462213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-19 05:20:43.462227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-19 05:20:43.462240 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:20:43.462251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:20:43.462263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-19 05:20:43.462274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-19 05:20:43.462285 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:43.462295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:20:43.462306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-19 05:20:43.462333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-19 05:20:43.462375 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:43.462387 | orchestrator | 2026-02-19 05:20:43.462398 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-19 05:20:43.462409 | orchestrator | Thursday 19 February 2026 05:20:37 +0000 (0:00:01.334) 0:04:51.821 ***** 2026-02-19 05:20:43.462420 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:20:43.462431 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:43.462441 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:43.462473 | orchestrator | 2026-02-19 05:20:43.462485 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-19 05:20:43.462496 | orchestrator | Thursday 19 February 2026 05:20:37 +0000 (0:00:00.491) 0:04:52.312 ***** 2026-02-19 05:20:43.462506 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:20:43.462517 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:43.462528 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:43.462539 | orchestrator | 2026-02-19 05:20:43.462549 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-19 05:20:43.462560 | orchestrator | Thursday 19 February 2026 05:20:39 +0000 (0:00:01.428) 0:04:53.741 ***** 2026-02-19 05:20:43.462571 | orchestrator | included: prometheus for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-19 05:20:43.462582 | orchestrator | 2026-02-19 05:20:43.462593 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-19 05:20:43.462604 | orchestrator | Thursday 19 February 2026 05:20:40 +0000 (0:00:01.727) 0:04:55.469 ***** 2026-02-19 05:20:43.462637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-19 05:20:43.462653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 05:20:43.462666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:43.462678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:43.462697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 05:20:43.462724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-19 05:20:45.338413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option 
httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-19 05:20:45.338507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 05:20:45.338522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 05:20:45.338548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:45.338578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:45.338588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:45.338615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:45.338626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 05:20:45.338635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 05:20:45.338645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:20:45.338666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-19 05:20:45.338676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:45.338693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:46.408233 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 05:20:46.408469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:20:46.408530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:20:46.408571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-19 05:20:46.408610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-19 05:20:46.408623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:46.408636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:46.408647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:46.408672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:46.408684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 05:20:46.408696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 
05:20:46.408707 | orchestrator | 2026-02-19 05:20:46.408721 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-19 05:20:46.408733 | orchestrator | Thursday 19 February 2026 05:20:45 +0000 (0:00:04.571) 0:05:00.040 ***** 2026-02-19 05:20:46.408755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-19 05:20:46.616146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 05:20:46.616252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:46.616309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:46.616324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 05:20:46.616339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:20:46.616421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-19 05:20:46.616436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:46.616456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:46.616474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 05:20:46.616487 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:20:46.616500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-19 05:20:46.616513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 05:20:46.616524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:46.616545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:47.058118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 05:20:47.058269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:20:47.058288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-19 05:20:47.058302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:47.058313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:47.058397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 05:20:47.058436 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:47.058449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-19 05:20:47.058467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-19 05:20:47.058479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:47.058489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:47.058499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-19 05:20:47.058519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:20:54.624883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-19 05:20:54.625010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:54.625032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:20:54.625050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-19 05:20:54.625066 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:54.625081 | orchestrator | 2026-02-19 05:20:54.625091 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-19 05:20:54.625102 | orchestrator | Thursday 19 February 2026 05:20:47 +0000 (0:00:01.544) 0:05:01.584 ***** 2026-02-19 05:20:54.625124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-19 05:20:54.625160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-19 05:20:54.625173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:20:54.625201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:20:54.625213 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:20:54.625251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-19 05:20:54.625261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-19 05:20:54.625277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:20:54.625287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:20:54.625296 | orchestrator | skipping: [testbed-node-1] 2026-02-19 
05:20:54.625305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-19 05:20:54.625314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-19 05:20:54.625323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:20:54.625339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-19 05:20:54.625370 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:54.625379 | orchestrator | 2026-02-19 05:20:54.625389 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-19 05:20:54.625399 | orchestrator | Thursday 19 February 2026 05:20:48 +0000 (0:00:01.050) 0:05:02.635 ***** 2026-02-19 05:20:54.625409 | orchestrator | skipping: 
[testbed-node-0] 2026-02-19 05:20:54.625420 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:54.625430 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:54.625439 | orchestrator | 2026-02-19 05:20:54.625449 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-19 05:20:54.625459 | orchestrator | Thursday 19 February 2026 05:20:48 +0000 (0:00:00.465) 0:05:03.100 ***** 2026-02-19 05:20:54.625469 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:20:54.625479 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:20:54.625488 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:20:54.625498 | orchestrator | 2026-02-19 05:20:54.625508 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-19 05:20:54.625518 | orchestrator | Thursday 19 February 2026 05:20:50 +0000 (0:00:01.509) 0:05:04.609 ***** 2026-02-19 05:20:54.625528 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:20:54.625538 | orchestrator | 2026-02-19 05:20:54.625548 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-19 05:20:54.625558 | orchestrator | Thursday 19 February 2026 05:20:51 +0000 (0:00:01.797) 0:05:06.407 ***** 2026-02-19 05:20:54.625581 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:21:06.256664 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:21:06.256773 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:21:06.256813 | orchestrator | 2026-02-19 05:21:06.256826 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-19 05:21:06.256838 | orchestrator | Thursday 19 February 2026 05:20:54 +0000 (0:00:02.745) 0:05:09.152 ***** 2026-02-19 05:21:06.256849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 05:21:06.256860 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:21:06.256903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 05:21:06.256916 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:21:06.256926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}})  2026-02-19 05:21:06.256943 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:21:06.256953 | orchestrator | 2026-02-19 05:21:06.256963 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-19 05:21:06.256973 | orchestrator | Thursday 19 February 2026 05:20:55 +0000 (0:00:00.594) 0:05:09.746 ***** 2026-02-19 05:21:06.256984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-19 05:21:06.256996 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:21:06.257006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-19 05:21:06.257016 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:21:06.257026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-19 05:21:06.257036 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:21:06.257046 | orchestrator | 2026-02-19 05:21:06.257056 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-19 05:21:06.257066 | orchestrator | Thursday 19 February 2026 05:20:55 +0000 (0:00:00.613) 0:05:10.360 ***** 2026-02-19 05:21:06.257075 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:21:06.257085 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:21:06.257095 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:21:06.257105 | orchestrator | 2026-02-19 05:21:06.257114 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-19 05:21:06.257124 | orchestrator | Thursday 19 February 2026 05:20:56 +0000 (0:00:00.445) 0:05:10.806 ***** 2026-02-19 
05:21:06.257134 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:21:06.257144 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:21:06.257153 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:21:06.257163 | orchestrator | 2026-02-19 05:21:06.257174 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-19 05:21:06.257184 | orchestrator | Thursday 19 February 2026 05:20:57 +0000 (0:00:01.489) 0:05:12.295 ***** 2026-02-19 05:21:06.257193 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:21:06.257203 | orchestrator | 2026-02-19 05:21:06.257213 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-19 05:21:06.257223 | orchestrator | Thursday 19 February 2026 05:20:59 +0000 (0:00:01.403) 0:05:13.699 ***** 2026-02-19 05:21:06.257234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-19 
05:21:06.257258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-19 05:21:06.948950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-19 05:21:06.949102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-19 05:21:06.949121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-19 05:21:06.949181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-19 05:21:06.949196 | orchestrator | 2026-02-19 05:21:06.949210 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-19 05:21:06.949223 | orchestrator | Thursday 19 February 2026 05:21:06 +0000 (0:00:07.072) 0:05:20.772 ***** 2026-02-19 05:21:06.949237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-19 05:21:06.949333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-19 05:21:06.949382 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:21:06.949413 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-19 05:21:06.949449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-19 05:21:18.664296 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:21:18.664552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-19 05:21:18.664582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-19 05:21:18.664598 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:21:18.664613 | orchestrator | 2026-02-19 05:21:18.664661 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-19 05:21:18.664677 | orchestrator | Thursday 19 February 2026 05:21:06 +0000 (0:00:00.704) 0:05:21.476 ***** 2026-02-19 05:21:18.664694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-19 05:21:18.664731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-19 05:21:18.664750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-19 05:21:18.664768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-19 05:21:18.664784 | orchestrator | skipping: 
[testbed-node-0] 2026-02-19 05:21:18.664801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-19 05:21:18.664817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-19 05:21:18.664882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-19 05:21:18.664902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-19 05:21:18.664912 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:21:18.664922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-19 05:21:18.664931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-19 05:21:18.664940 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-19 05:21:18.664950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-19 05:21:18.664959 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:21:18.664968 | orchestrator | 2026-02-19 05:21:18.664985 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-19 05:21:18.664995 | orchestrator | Thursday 19 February 2026 05:21:07 +0000 (0:00:00.989) 0:05:22.465 ***** 2026-02-19 05:21:18.665004 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:21:18.665013 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:21:18.665021 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:21:18.665030 | orchestrator | 2026-02-19 05:21:18.665038 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-19 05:21:18.665047 | orchestrator | Thursday 19 February 2026 05:21:09 +0000 (0:00:01.638) 0:05:24.104 ***** 2026-02-19 05:21:18.665057 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:21:18.665066 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:21:18.665074 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:21:18.665081 | orchestrator | 2026-02-19 05:21:18.665089 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-19 05:21:18.665097 | orchestrator | Thursday 19 February 2026 05:21:11 +0000 (0:00:02.126) 0:05:26.230 ***** 2026-02-19 05:21:18.665105 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:21:18.665112 | orchestrator 
| skipping: [testbed-node-1] 2026-02-19 05:21:18.665120 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:21:18.665128 | orchestrator | 2026-02-19 05:21:18.665136 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-19 05:21:18.665143 | orchestrator | Thursday 19 February 2026 05:21:12 +0000 (0:00:00.360) 0:05:26.590 ***** 2026-02-19 05:21:18.665151 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:21:18.665159 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:21:18.665167 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:21:18.665174 | orchestrator | 2026-02-19 05:21:18.665187 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-19 05:21:18.665196 | orchestrator | Thursday 19 February 2026 05:21:12 +0000 (0:00:00.343) 0:05:26.934 ***** 2026-02-19 05:21:18.665203 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:21:18.665211 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:21:18.665219 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:21:18.665226 | orchestrator | 2026-02-19 05:21:18.665234 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-19 05:21:18.665242 | orchestrator | Thursday 19 February 2026 05:21:13 +0000 (0:00:00.627) 0:05:27.562 ***** 2026-02-19 05:21:18.665250 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:21:18.665257 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:21:18.665265 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:21:18.665273 | orchestrator | 2026-02-19 05:21:18.665281 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-19 05:21:18.665288 | orchestrator | Thursday 19 February 2026 05:21:13 +0000 (0:00:00.340) 0:05:27.902 ***** 2026-02-19 05:21:18.665296 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:21:18.665304 | orchestrator | 
skipping: [testbed-node-1] 2026-02-19 05:21:18.665311 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:21:18.665320 | orchestrator | 2026-02-19 05:21:18.665328 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-19 05:21:18.665335 | orchestrator | Thursday 19 February 2026 05:21:13 +0000 (0:00:00.324) 0:05:28.227 ***** 2026-02-19 05:21:18.665343 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:21:18.665352 | orchestrator | 2026-02-19 05:21:18.665379 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-19 05:21:18.665387 | orchestrator | Thursday 19 February 2026 05:21:15 +0000 (0:00:01.833) 0:05:30.061 ***** 2026-02-19 05:21:18.665403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-19 05:21:21.138963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-19 05:21:21.139072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-19 05:21:21.139088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:21:21.139183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:21:21.139207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-19 05:21:21.139220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:21:21.139285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:21:21.139301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-19 05:21:21.139314 | orchestrator | 2026-02-19 05:21:21.139327 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-19 05:21:21.139340 | orchestrator | Thursday 19 February 2026 05:21:18 +0000 (0:00:03.125) 0:05:33.186 ***** 2026-02-19 05:21:21.139352 | orchestrator | changed: [testbed-node-0] => { 2026-02-19 05:21:21.139469 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:21:21.139492 | orchestrator | } 2026-02-19 05:21:21.139512 | orchestrator | changed: [testbed-node-1] => { 2026-02-19 05:21:21.139531 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:21:21.139550 | orchestrator | } 2026-02-19 05:21:21.139569 | orchestrator | changed: [testbed-node-2] => { 2026-02-19 05:21:21.139588 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:21:21.139608 | orchestrator | } 2026-02-19 05:21:21.139627 | orchestrator | 2026-02-19 05:21:21.139646 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-19 05:21:21.139665 | orchestrator | Thursday 19 February 2026 05:21:19 +0000 (0:00:00.379) 0:05:33.566 ***** 2026-02-19 05:21:21.139683 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-19 05:21:21.139705 | orchestrator 
| plugin (): 'NoneType' object is not subscriptable 2026-02-19 05:21:21.139755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-19 05:21:21.139778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:21:21.139815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
 2026-02-19 05:21:21.139837 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:21:21.139874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-19 05:23:03.458711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:23:03.458855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-02-19 05:23:03.458882 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:23:03.458903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-19 05:23:03.458940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-19 05:23:03.458959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-19 05:23:03.459009 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:23:03.459026 | orchestrator | 2026-02-19 05:23:03.459045 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-19 05:23:03.459063 | orchestrator | Thursday 19 February 2026 05:21:21 +0000 (0:00:02.094) 0:05:35.661 ***** 2026-02-19 05:23:03.459079 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:23:03.459096 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:23:03.459113 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:23:03.459129 | orchestrator | 2026-02-19 05:23:03.459145 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-19 05:23:03.459162 | orchestrator | Thursday 19 February 2026 05:21:22 +0000 (0:00:01.087) 0:05:36.748 ***** 2026-02-19 05:23:03.459179 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:23:03.459196 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:23:03.459212 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:23:03.459230 | orchestrator | 2026-02-19 05:23:03.459242 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-19 05:23:03.459254 | orchestrator | Thursday 19 February 2026 05:21:22 +0000 (0:00:00.372) 0:05:37.121 ***** 2026-02-19 05:23:03.459265 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:23:03.459277 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:23:03.459288 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:23:03.459299 | orchestrator | 2026-02-19 05:23:03.459310 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-19 05:23:03.459321 | orchestrator | Thursday 19 February 2026 05:21:28 +0000 (0:00:06.061) 0:05:43.183 ***** 2026-02-19 05:23:03.459353 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:23:03.459365 | orchestrator | 
changed: [testbed-node-1] 2026-02-19 05:23:03.459376 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:23:03.459387 | orchestrator | 2026-02-19 05:23:03.459399 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-19 05:23:03.459448 | orchestrator | Thursday 19 February 2026 05:21:34 +0000 (0:00:06.035) 0:05:49.218 ***** 2026-02-19 05:23:03.459460 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:23:03.459471 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:23:03.459482 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:23:03.459493 | orchestrator | 2026-02-19 05:23:03.459505 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-19 05:23:03.459517 | orchestrator | Thursday 19 February 2026 05:21:41 +0000 (0:00:06.458) 0:05:55.677 ***** 2026-02-19 05:23:03.459528 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:23:03.459539 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:23:03.459551 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:23:03.459562 | orchestrator | 2026-02-19 05:23:03.459573 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-19 05:23:03.459583 | orchestrator | Thursday 19 February 2026 05:21:47 +0000 (0:00:06.058) 0:06:01.735 ***** 2026-02-19 05:23:03.459593 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:23:03.459602 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:23:03.459612 | orchestrator | 2026-02-19 05:23:03.459622 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-19 05:23:03.459631 | orchestrator | Thursday 19 February 2026 05:21:50 +0000 (0:00:03.673) 0:06:05.409 ***** 2026-02-19 05:23:03.459641 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:23:03.459651 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:23:03.459660 | orchestrator | changed: 
[testbed-node-1] 2026-02-19 05:23:03.459670 | orchestrator | 2026-02-19 05:23:03.459680 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-19 05:23:03.459703 | orchestrator | Thursday 19 February 2026 05:22:02 +0000 (0:00:11.306) 0:06:16.716 ***** 2026-02-19 05:23:03.459711 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:23:03.459719 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:23:03.459726 | orchestrator | 2026-02-19 05:23:03.459734 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-19 05:23:03.459742 | orchestrator | Thursday 19 February 2026 05:22:06 +0000 (0:00:04.066) 0:06:20.782 ***** 2026-02-19 05:23:03.459750 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:23:03.459758 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:23:03.459765 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:23:03.459773 | orchestrator | 2026-02-19 05:23:03.459781 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-19 05:23:03.459789 | orchestrator | Thursday 19 February 2026 05:22:12 +0000 (0:00:05.896) 0:06:26.678 ***** 2026-02-19 05:23:03.459797 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:23:03.459805 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:23:03.459813 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:23:03.459821 | orchestrator | 2026-02-19 05:23:03.459828 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-19 05:23:03.459836 | orchestrator | Thursday 19 February 2026 05:22:18 +0000 (0:00:05.916) 0:06:32.595 ***** 2026-02-19 05:23:03.459850 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:23:03.459859 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:23:03.459866 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:23:03.459874 | orchestrator | 2026-02-19 05:23:03.459882 | 
orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-19 05:23:03.459890 | orchestrator | Thursday 19 February 2026 05:22:23 +0000 (0:00:05.878) 0:06:38.474 ***** 2026-02-19 05:23:03.459898 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:23:03.459905 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:23:03.459913 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:23:03.459921 | orchestrator | 2026-02-19 05:23:03.459929 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-19 05:23:03.459937 | orchestrator | Thursday 19 February 2026 05:22:29 +0000 (0:00:05.878) 0:06:44.353 ***** 2026-02-19 05:23:03.459945 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:23:03.459953 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:23:03.459960 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:23:03.459968 | orchestrator | 2026-02-19 05:23:03.459976 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-19 05:23:03.459984 | orchestrator | Thursday 19 February 2026 05:22:36 +0000 (0:00:06.633) 0:06:50.986 ***** 2026-02-19 05:23:03.459992 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:23:03.460000 | orchestrator | 2026-02-19 05:23:03.460008 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-19 05:23:03.460016 | orchestrator | Thursday 19 February 2026 05:22:40 +0000 (0:00:03.638) 0:06:54.624 ***** 2026-02-19 05:23:03.460023 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:23:03.460031 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:23:03.460039 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:23:03.460047 | orchestrator | 2026-02-19 05:23:03.460055 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-19 05:23:03.460063 | orchestrator | Thursday 19 
February 2026 05:22:51 +0000 (0:00:11.677) 0:07:06.302 ***** 2026-02-19 05:23:03.460070 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:23:03.460078 | orchestrator | 2026-02-19 05:23:03.460086 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-19 05:23:03.460094 | orchestrator | Thursday 19 February 2026 05:22:56 +0000 (0:00:04.612) 0:07:10.914 ***** 2026-02-19 05:23:03.460102 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:23:03.460109 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:23:03.460117 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:23:03.460125 | orchestrator | 2026-02-19 05:23:03.460138 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-19 05:23:03.460146 | orchestrator | Thursday 19 February 2026 05:23:02 +0000 (0:00:06.072) 0:07:16.987 ***** 2026-02-19 05:23:03.460154 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:23:03.460162 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:23:03.460170 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:23:03.460177 | orchestrator | 2026-02-19 05:23:03.460185 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-19 05:23:03.460198 | orchestrator | Thursday 19 February 2026 05:23:03 +0000 (0:00:00.988) 0:07:17.975 ***** 2026-02-19 05:23:05.843894 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:23:05.844001 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:23:05.844020 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:23:05.844034 | orchestrator | 2026-02-19 05:23:05.844049 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:23:05.844067 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-19 05:23:05.844083 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 
skipped=94  rescued=0 ignored=0 2026-02-19 05:23:05.844099 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-19 05:23:05.844113 | orchestrator | 2026-02-19 05:23:05.844128 | orchestrator | 2026-02-19 05:23:05.844143 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 05:23:05.844157 | orchestrator | Thursday 19 February 2026 05:23:05 +0000 (0:00:01.613) 0:07:19.589 ***** 2026-02-19 05:23:05.844169 | orchestrator | =============================================================================== 2026-02-19 05:23:05.844184 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 11.68s 2026-02-19 05:23:05.844205 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.31s 2026-02-19 05:23:05.844220 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.07s 2026-02-19 05:23:05.844234 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 6.63s 2026-02-19 05:23:05.844248 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 6.46s 2026-02-19 05:23:05.844261 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.07s 2026-02-19 05:23:05.844275 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 6.06s 2026-02-19 05:23:05.844289 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 6.06s 2026-02-19 05:23:05.844302 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 6.04s 2026-02-19 05:23:05.844314 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 5.92s 2026-02-19 05:23:05.844327 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 5.90s 2026-02-19 
05:23:05.844341 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 5.88s 2026-02-19 05:23:05.844354 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 5.88s 2026-02-19 05:23:05.844368 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.86s 2026-02-19 05:23:05.844403 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.61s 2026-02-19 05:23:05.844483 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.85s 2026-02-19 05:23:05.844500 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.64s 2026-02-19 05:23:05.844516 | orchestrator | loadbalancer : Wait for master proxysql to start ------------------------ 4.61s 2026-02-19 05:23:05.844528 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.57s 2026-02-19 05:23:05.844539 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.31s 2026-02-19 05:23:06.136999 | orchestrator | + osism apply -a upgrade opensearch 2026-02-19 05:23:08.151897 | orchestrator | 2026-02-19 05:23:08 | INFO  | Task 650d8b4c-7086-4ace-b891-2bccf18590b8 (opensearch) was prepared for execution. 2026-02-19 05:23:08.151993 | orchestrator | 2026-02-19 05:23:08 | INFO  | It takes a moment until task 650d8b4c-7086-4ace-b891-2bccf18590b8 (opensearch) has been started and output is visible here. 
2026-02-19 05:23:27.118712 | orchestrator | 2026-02-19 05:23:27.118831 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 05:23:27.118857 | orchestrator | 2026-02-19 05:23:27.118878 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 05:23:27.118898 | orchestrator | Thursday 19 February 2026 05:23:13 +0000 (0:00:01.406) 0:00:01.406 ***** 2026-02-19 05:23:27.118919 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:23:27.118933 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:23:27.118944 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:23:27.118955 | orchestrator | 2026-02-19 05:23:27.118966 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 05:23:27.118977 | orchestrator | Thursday 19 February 2026 05:23:15 +0000 (0:00:01.868) 0:00:03.275 ***** 2026-02-19 05:23:27.118988 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-19 05:23:27.118999 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-19 05:23:27.119010 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-19 05:23:27.119021 | orchestrator | 2026-02-19 05:23:27.119031 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-19 05:23:27.119042 | orchestrator | 2026-02-19 05:23:27.119053 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-19 05:23:27.119063 | orchestrator | Thursday 19 February 2026 05:23:17 +0000 (0:00:01.688) 0:00:04.964 ***** 2026-02-19 05:23:27.119074 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:23:27.119085 | orchestrator | 2026-02-19 05:23:27.119096 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-02-19 05:23:27.119106 | orchestrator | Thursday 19 February 2026 05:23:19 +0000 (0:00:02.501) 0:00:07.465 ***** 2026-02-19 05:23:27.119117 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-19 05:23:27.119128 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-19 05:23:27.119138 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-19 05:23:27.119149 | orchestrator | 2026-02-19 05:23:27.119160 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-19 05:23:27.119171 | orchestrator | Thursday 19 February 2026 05:23:22 +0000 (0:00:03.018) 0:00:10.483 ***** 2026-02-19 05:23:27.119188 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:23:27.119240 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:23:27.119319 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:23:27.119344 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:23:27.119366 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:23:27.119394 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:23:27.119458 | orchestrator | 2026-02-19 05:23:27.119481 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-19 05:23:27.119500 | orchestrator | Thursday 19 February 2026 05:23:25 +0000 (0:00:02.487) 0:00:12.971 ***** 2026-02-19 05:23:27.119518 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:23:27.119538 | orchestrator | 2026-02-19 05:23:27.119560 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-19 05:23:32.499093 | orchestrator | Thursday 19 February 2026 05:23:27 +0000 
(0:00:01.792) 0:00:14.764 ***** 2026-02-19 05:23:32.499207 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:23:32.499224 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:23:32.499242 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:23:32.499294 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:23:32.499325 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:23:32.499336 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:23:32.499352 | orchestrator | 2026-02-19 05:23:32.499362 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-19 05:23:32.499372 | orchestrator | Thursday 19 February 2026 05:23:30 +0000 (0:00:03.600) 0:00:18.364 ***** 2026-02-19 05:23:32.499386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:23:32.499403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-19 05:23:34.268559 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:23:34.268665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
 2026-02-19 05:23:34.268685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-19 05:23:34.268719 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:23:34.268743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:23:34.268772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-19 05:23:34.268785 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:23:34.268796 | orchestrator | 2026-02-19 05:23:34.268807 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-19 05:23:34.268818 | orchestrator | Thursday 19 February 2026 05:23:32 +0000 (0:00:01.783) 0:00:20.148 ***** 2026-02-19 05:23:34.268828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:23:34.268839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  
2026-02-19 05:23:34.268857 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:23:34.268873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:23:34.268891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-19 05:23:37.992224 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:23:37.992334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:23:37.992377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-19 05:23:37.992391 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:23:37.992402 | orchestrator | 2026-02-19 05:23:37.992414 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-19 05:23:37.992516 | orchestrator | Thursday 19 February 2026 05:23:34 +0000 (0:00:01.765) 0:00:21.914 ***** 2026-02-19 05:23:37.992531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:23:37.992596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:23:37.992610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:23:37.992631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:23:37.992649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:23:37.992669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:23:51.417368 | orchestrator | 2026-02-19 05:23:51.417530 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-19 05:23:51.417549 | orchestrator | Thursday 19 February 2026 05:23:37 +0000 (0:00:03.722) 0:00:25.636 ***** 2026-02-19 05:23:51.417560 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:23:51.417573 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:23:51.417584 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:23:51.417595 | orchestrator | 2026-02-19 05:23:51.417606 | orchestrator | TASK [opensearch : 
Copying over opensearch-dashboards config file] ************* 2026-02-19 05:23:51.417618 | orchestrator | Thursday 19 February 2026 05:23:41 +0000 (0:00:03.504) 0:00:29.141 ***** 2026-02-19 05:23:51.417628 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:23:51.417639 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:23:51.417650 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:23:51.417661 | orchestrator | 2026-02-19 05:23:51.417672 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-02-19 05:23:51.417683 | orchestrator | Thursday 19 February 2026 05:23:44 +0000 (0:00:03.032) 0:00:32.173 ***** 2026-02-19 05:23:51.417696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:23:51.417729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:23:51.417742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-19 05:23:51.417777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:23:51.417813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:23:51.417833 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-19 05:23:51.417846 | orchestrator | 2026-02-19 05:23:51.417857 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-02-19 05:23:51.417881 | orchestrator | Thursday 19 February 2026 05:23:48 +0000 (0:00:03.512) 0:00:35.686 ***** 2026-02-19 05:23:51.417893 | orchestrator | changed: [testbed-node-0] => { 2026-02-19 05:23:51.417906 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:23:51.417919 | orchestrator | } 2026-02-19 05:23:51.417932 | orchestrator | changed: [testbed-node-1] => { 2026-02-19 05:23:51.417953 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:23:51.417966 | orchestrator | } 2026-02-19 05:23:51.417979 | orchestrator | changed: [testbed-node-2] => { 2026-02-19 05:23:51.417991 | orchestrator 
|  "msg": "Notifying handlers" 2026-02-19 05:23:51.418003 | orchestrator | } 2026-02-19 05:23:51.418075 | orchestrator | 2026-02-19 05:23:51.418088 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-19 05:23:51.418099 | orchestrator | Thursday 19 February 2026 05:23:49 +0000 (0:00:01.400) 0:00:37.087 ***** 2026-02-19 05:23:51.418120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:27:10.317702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-19 05:27:10.317826 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:27:10.317863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:27:10.317879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-19 05:27:10.317916 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:27:10.317945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-19 05:27:10.317959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-19 05:27:10.317971 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:27:10.317982 | orchestrator | 2026-02-19 05:27:10.317995 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-19 05:27:10.318007 | orchestrator | Thursday 19 February 2026 05:23:51 +0000 (0:00:01.976) 0:00:39.063 ***** 2026-02-19 05:27:10.318085 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:27:10.318099 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:27:10.318110 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:27:10.318121 | orchestrator | 2026-02-19 05:27:10.318134 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-19 05:27:10.318146 | orchestrator | Thursday 19 February 2026 05:23:52 +0000 (0:00:01.475) 0:00:40.539 ***** 2026-02-19 05:27:10.318159 | orchestrator | 
2026-02-19 05:27:10.318171 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-19 05:27:10.318183 | orchestrator | Thursday 19 February 2026 05:23:53 +0000 (0:00:00.435) 0:00:40.974 ***** 2026-02-19 05:27:10.318208 | orchestrator | 2026-02-19 05:27:10.318220 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-19 05:27:10.318232 | orchestrator | Thursday 19 February 2026 05:23:53 +0000 (0:00:00.431) 0:00:41.405 ***** 2026-02-19 05:27:10.318244 | orchestrator | 2026-02-19 05:27:10.318258 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-19 05:27:10.318271 | orchestrator | Thursday 19 February 2026 05:23:54 +0000 (0:00:00.762) 0:00:42.168 ***** 2026-02-19 05:27:10.318283 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:27:10.318297 | orchestrator | 2026-02-19 05:27:10.318309 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-19 05:27:10.318322 | orchestrator | Thursday 19 February 2026 05:23:58 +0000 (0:00:03.746) 0:00:45.914 ***** 2026-02-19 05:27:10.318333 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:27:10.318344 | orchestrator | 2026-02-19 05:27:10.318355 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-19 05:27:10.318366 | orchestrator | Thursday 19 February 2026 05:24:07 +0000 (0:00:09.199) 0:00:55.114 ***** 2026-02-19 05:27:10.318377 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:27:10.318387 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:27:10.318398 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:27:10.318409 | orchestrator | 2026-02-19 05:27:10.318420 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-19 05:27:10.318431 | orchestrator | Thursday 19 February 2026 05:25:19 +0000 (0:01:12.476) 
0:02:07.590 ***** 2026-02-19 05:27:10.318442 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:27:10.318452 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:27:10.318463 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:27:10.318474 | orchestrator | 2026-02-19 05:27:10.318485 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-19 05:27:10.318495 | orchestrator | Thursday 19 February 2026 05:27:00 +0000 (0:01:40.242) 0:03:47.832 ***** 2026-02-19 05:27:10.318507 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:27:10.318518 | orchestrator | 2026-02-19 05:27:10.318584 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-19 05:27:10.318597 | orchestrator | Thursday 19 February 2026 05:27:01 +0000 (0:00:01.669) 0:03:49.502 ***** 2026-02-19 05:27:10.318607 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:27:10.318618 | orchestrator | 2026-02-19 05:27:10.318629 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-19 05:27:10.318640 | orchestrator | Thursday 19 February 2026 05:27:05 +0000 (0:00:03.571) 0:03:53.074 ***** 2026-02-19 05:27:10.318650 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:27:10.318661 | orchestrator | 2026-02-19 05:27:10.318672 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-19 05:27:10.318683 | orchestrator | Thursday 19 February 2026 05:27:09 +0000 (0:00:03.643) 0:03:56.717 ***** 2026-02-19 05:27:10.318698 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:27:10.318716 | orchestrator | 2026-02-19 05:27:10.318728 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-19 05:27:10.318754 | orchestrator | Thursday 19 February 2026 05:27:10 +0000 (0:00:01.242) 
0:03:57.960 ***** 2026-02-19 05:27:13.049850 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:27:13.049925 | orchestrator | 2026-02-19 05:27:13.049932 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:27:13.049938 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-19 05:27:13.049944 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-19 05:27:13.049948 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-19 05:27:13.049969 | orchestrator | 2026-02-19 05:27:13.049973 | orchestrator | 2026-02-19 05:27:13.049977 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 05:27:13.049981 | orchestrator | Thursday 19 February 2026 05:27:12 +0000 (0:00:02.389) 0:04:00.350 ***** 2026-02-19 05:27:13.049985 | orchestrator | =============================================================================== 2026-02-19 05:27:13.049989 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------ 100.24s 2026-02-19 05:27:13.049993 | orchestrator | opensearch : Restart opensearch container ------------------------------ 72.48s 2026-02-19 05:27:13.049997 | orchestrator | opensearch : Perform a flush -------------------------------------------- 9.20s 2026-02-19 05:27:13.050001 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.75s 2026-02-19 05:27:13.050004 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.72s 2026-02-19 05:27:13.050008 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.64s 2026-02-19 05:27:13.050012 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.60s 2026-02-19 
05:27:13.050050 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.57s 2026-02-19 05:27:13.050064 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.51s 2026-02-19 05:27:13.050068 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.50s 2026-02-19 05:27:13.050072 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.03s 2026-02-19 05:27:13.050076 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 3.02s 2026-02-19 05:27:13.050079 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.50s 2026-02-19 05:27:13.050083 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.49s 2026-02-19 05:27:13.050087 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.39s 2026-02-19 05:27:13.050091 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.98s 2026-02-19 05:27:13.050094 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.87s 2026-02-19 05:27:13.050098 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.79s 2026-02-19 05:27:13.050102 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.78s 2026-02-19 05:27:13.050106 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.77s 2026-02-19 05:27:13.342914 | orchestrator | + osism apply -a upgrade memcached 2026-02-19 05:27:15.365221 | orchestrator | 2026-02-19 05:27:15 | INFO  | Task c36ac004-fd17-4604-be36-37a62b9fb804 (memcached) was prepared for execution. 
2026-02-19 05:27:15.365378 | orchestrator | 2026-02-19 05:27:15 | INFO  | It takes a moment until task c36ac004-fd17-4604-be36-37a62b9fb804 (memcached) has been started and output is visible here. 2026-02-19 05:27:46.917399 | orchestrator | 2026-02-19 05:27:46.919366 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 05:27:46.919409 | orchestrator | 2026-02-19 05:27:46.919429 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 05:27:46.919449 | orchestrator | Thursday 19 February 2026 05:27:20 +0000 (0:00:01.475) 0:00:01.475 ***** 2026-02-19 05:27:46.919468 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:27:46.919488 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:27:46.919506 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:27:46.919525 | orchestrator | 2026-02-19 05:27:46.919578 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 05:27:46.919599 | orchestrator | Thursday 19 February 2026 05:27:22 +0000 (0:00:01.653) 0:00:03.129 ***** 2026-02-19 05:27:46.919618 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-19 05:27:46.919636 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-19 05:27:46.919656 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-19 05:27:46.919717 | orchestrator | 2026-02-19 05:27:46.919737 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-19 05:27:46.919756 | orchestrator | 2026-02-19 05:27:46.919774 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-19 05:27:46.919792 | orchestrator | Thursday 19 February 2026 05:27:24 +0000 (0:00:01.792) 0:00:04.921 ***** 2026-02-19 05:27:46.919810 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-19 05:27:46.919829 | orchestrator | 2026-02-19 05:27:46.919846 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-19 05:27:46.919865 | orchestrator | Thursday 19 February 2026 05:27:26 +0000 (0:00:02.202) 0:00:07.124 ***** 2026-02-19 05:27:46.919884 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-19 05:27:46.919904 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-19 05:27:46.919922 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-19 05:27:46.919940 | orchestrator | 2026-02-19 05:27:46.919953 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-19 05:27:46.919963 | orchestrator | Thursday 19 February 2026 05:27:28 +0000 (0:00:01.944) 0:00:09.068 ***** 2026-02-19 05:27:46.919974 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-19 05:27:46.919986 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-19 05:27:46.919997 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-19 05:27:46.920007 | orchestrator | 2026-02-19 05:27:46.920018 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-19 05:27:46.920029 | orchestrator | Thursday 19 February 2026 05:27:30 +0000 (0:00:02.601) 0:00:11.670 ***** 2026-02-19 05:27:46.920045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-19 05:27:46.920077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-19 05:27:46.920165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-19 05:27:46.920212 | orchestrator | 2026-02-19 05:27:46.920234 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 
2026-02-19 05:27:46.920254 | orchestrator | Thursday 19 February 2026 05:27:33 +0000 (0:00:02.185) 0:00:13.856 ***** 2026-02-19 05:27:46.920269 | orchestrator | changed: [testbed-node-0] => { 2026-02-19 05:27:46.920281 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:27:46.920293 | orchestrator | } 2026-02-19 05:27:46.920304 | orchestrator | changed: [testbed-node-1] => { 2026-02-19 05:27:46.920315 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:27:46.920326 | orchestrator | } 2026-02-19 05:27:46.920337 | orchestrator | changed: [testbed-node-2] => { 2026-02-19 05:27:46.920347 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:27:46.920358 | orchestrator | } 2026-02-19 05:27:46.920369 | orchestrator | 2026-02-19 05:27:46.920380 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-19 05:27:46.920391 | orchestrator | Thursday 19 February 2026 05:27:34 +0000 (0:00:01.264) 0:00:15.120 ***** 2026-02-19 05:27:46.920403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-19 05:27:46.920415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-19 05:27:46.920426 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:27:46.920437 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:27:46.920457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-19 05:27:46.920468 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:27:46.920479 | orchestrator | 2026-02-19 05:27:46.920490 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-19 05:27:46.920502 | orchestrator | Thursday 19 February 2026 05:27:36 +0000 (0:00:01.813) 0:00:16.934 ***** 2026-02-19 05:27:46.920512 | 
orchestrator | changed: [testbed-node-1] 2026-02-19 05:27:46.920531 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:27:46.920572 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:27:46.920584 | orchestrator | 2026-02-19 05:27:46.920595 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:27:46.920608 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 05:27:46.920620 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 05:27:46.920631 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 05:27:46.920642 | orchestrator | 2026-02-19 05:27:46.920653 | orchestrator | 2026-02-19 05:27:46.920664 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 05:27:46.920685 | orchestrator | Thursday 19 February 2026 05:27:46 +0000 (0:00:10.688) 0:00:27.623 ***** 2026-02-19 05:27:47.189391 | orchestrator | =============================================================================== 2026-02-19 05:27:47.189465 | orchestrator | memcached : Restart memcached container -------------------------------- 10.69s 2026-02-19 05:27:47.189471 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.60s 2026-02-19 05:27:47.189475 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.20s 2026-02-19 05:27:47.189479 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.18s 2026-02-19 05:27:47.189484 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.94s 2026-02-19 05:27:47.189488 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.81s 2026-02-19 05:27:47.189492 | orchestrator | Group hosts 
based on enabled services ----------------------------------- 1.79s 2026-02-19 05:27:47.189496 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.65s 2026-02-19 05:27:47.189500 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.27s 2026-02-19 05:27:47.457866 | orchestrator | + osism apply -a upgrade redis 2026-02-19 05:27:49.502425 | orchestrator | 2026-02-19 05:27:49 | INFO  | Task a249e4f6-fd33-430a-a940-5b2570728f4f (redis) was prepared for execution. 2026-02-19 05:27:49.502525 | orchestrator | 2026-02-19 05:27:49 | INFO  | It takes a moment until task a249e4f6-fd33-430a-a940-5b2570728f4f (redis) has been started and output is visible here. 2026-02-19 05:28:07.657790 | orchestrator | 2026-02-19 05:28:07.657931 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 05:28:07.657950 | orchestrator | 2026-02-19 05:28:07.657963 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 05:28:07.657975 | orchestrator | Thursday 19 February 2026 05:27:55 +0000 (0:00:01.513) 0:00:01.513 ***** 2026-02-19 05:28:07.657986 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:28:07.657998 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:28:07.658009 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:28:07.658083 | orchestrator | 2026-02-19 05:28:07.658094 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 05:28:07.658106 | orchestrator | Thursday 19 February 2026 05:27:56 +0000 (0:00:01.698) 0:00:03.211 ***** 2026-02-19 05:28:07.658118 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-19 05:28:07.658129 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-19 05:28:07.658140 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-19 05:28:07.658151 | orchestrator 
| 2026-02-19 05:28:07.658162 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-19 05:28:07.658173 | orchestrator | 2026-02-19 05:28:07.658184 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-19 05:28:07.658195 | orchestrator | Thursday 19 February 2026 05:27:58 +0000 (0:00:01.608) 0:00:04.819 ***** 2026-02-19 05:28:07.658233 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:28:07.658245 | orchestrator | 2026-02-19 05:28:07.658257 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-19 05:28:07.658268 | orchestrator | Thursday 19 February 2026 05:28:01 +0000 (0:00:02.890) 0:00:07.710 ***** 2026-02-19 05:28:07.658297 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 05:28:07.658312 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 05:28:07.658327 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 05:28:07.658341 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 05:28:07.658410 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 05:28:07.658425 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 05:28:07.658446 | orchestrator | 2026-02-19 05:28:07.658460 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-19 05:28:07.658472 | orchestrator | Thursday 19 February 2026 05:28:03 +0000 (0:00:02.315) 0:00:10.025 ***** 2026-02-19 05:28:07.658491 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 05:28:07.658504 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 05:28:07.658518 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 05:28:07.658531 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 05:28:07.658588 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.675444 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.675684 | orchestrator | 2026-02-19 05:28:14.675716 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-19 05:28:14.675735 | orchestrator | Thursday 19 February 2026 05:28:07 +0000 (0:00:04.026) 0:00:14.052 ***** 2026-02-19 05:28:14.675755 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.675774 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.675891 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.675925 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.675943 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.675986 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.676019 | orchestrator | 2026-02-19 05:28:14.676038 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-02-19 05:28:14.676056 | orchestrator | Thursday 19 February 2026 05:28:11 +0000 (0:00:03.925) 0:00:17.977 ***** 2026-02-19 05:28:14.676074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.676100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.676119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.676137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.676158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-19 05:28:14.676201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-02-19 05:28:42.231956 | orchestrator | 2026-02-19 05:28:42.232121 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-02-19 05:28:42.232150 | orchestrator | Thursday 19 February 2026 05:28:14 +0000 (0:00:03.103) 0:00:21.081 ***** 2026-02-19 05:28:42.232171 | orchestrator | changed: [testbed-node-0] => { 2026-02-19 05:28:42.232311 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:28:42.232339 | orchestrator | } 2026-02-19 05:28:42.232351 | orchestrator | changed: [testbed-node-1] => { 2026-02-19 05:28:42.232361 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:28:42.232370 | orchestrator | } 2026-02-19 05:28:42.232380 | orchestrator | changed: [testbed-node-2] => { 2026-02-19 05:28:42.232390 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:28:42.232400 | orchestrator | } 2026-02-19 05:28:42.232410 | orchestrator | 2026-02-19 05:28:42.232420 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-19 05:28:42.232430 | orchestrator | Thursday 19 February 2026 05:28:16 +0000 (0:00:01.634) 0:00:22.715 ***** 2026-02-19 05:28:42.232460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-19 05:28:42.232475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-19 05:28:42.232486 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:28:42.232500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-19 05:28:42.232518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-19 05:28:42.232581 | orchestrator | 
skipping: [testbed-node-1] 2026-02-19 05:28:42.232600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-19 05:28:42.232661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-19 05:28:42.232682 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:28:42.232699 | orchestrator | 2026-02-19 05:28:42.232712 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-19 05:28:42.232722 | orchestrator | Thursday 19 February 2026 05:28:18 +0000 (0:00:01.859) 0:00:24.574 ***** 2026-02-19 05:28:42.232732 | orchestrator | 2026-02-19 05:28:42.232742 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-19 05:28:42.232751 | orchestrator | Thursday 19 February 2026 05:28:18 +0000 
(0:00:00.446) 0:00:25.020 ***** 2026-02-19 05:28:42.232761 | orchestrator | 2026-02-19 05:28:42.232771 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-19 05:28:42.232788 | orchestrator | Thursday 19 February 2026 05:28:19 +0000 (0:00:00.444) 0:00:25.465 ***** 2026-02-19 05:28:42.232798 | orchestrator | 2026-02-19 05:28:42.232807 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-19 05:28:42.232817 | orchestrator | Thursday 19 February 2026 05:28:19 +0000 (0:00:00.784) 0:00:26.249 ***** 2026-02-19 05:28:42.232827 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:28:42.232836 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:28:42.232846 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:28:42.232856 | orchestrator | 2026-02-19 05:28:42.232865 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-19 05:28:42.232875 | orchestrator | Thursday 19 February 2026 05:28:30 +0000 (0:00:10.899) 0:00:37.149 ***** 2026-02-19 05:28:42.232885 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:28:42.232894 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:28:42.232904 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:28:42.232913 | orchestrator | 2026-02-19 05:28:42.232923 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:28:42.232934 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 05:28:42.232954 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 05:28:42.232964 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 05:28:42.232974 | orchestrator | 2026-02-19 05:28:42.232983 | orchestrator | 2026-02-19 05:28:42.232993 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 05:28:42.233003 | orchestrator | Thursday 19 February 2026 05:28:41 +0000 (0:00:11.130) 0:00:48.280 ***** 2026-02-19 05:28:42.233012 | orchestrator | =============================================================================== 2026-02-19 05:28:42.233022 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.13s 2026-02-19 05:28:42.233031 | orchestrator | redis : Restart redis container ---------------------------------------- 10.90s 2026-02-19 05:28:42.233041 | orchestrator | redis : Copying over default config.json files -------------------------- 4.03s 2026-02-19 05:28:42.233050 | orchestrator | redis : Copying over redis config files --------------------------------- 3.93s 2026-02-19 05:28:42.233060 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.10s 2026-02-19 05:28:42.233069 | orchestrator | redis : include_tasks --------------------------------------------------- 2.89s 2026-02-19 05:28:42.233079 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.32s 2026-02-19 05:28:42.233088 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.86s 2026-02-19 05:28:42.233098 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.70s 2026-02-19 05:28:42.233107 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.68s 2026-02-19 05:28:42.233117 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.63s 2026-02-19 05:28:42.233126 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.61s 2026-02-19 05:28:42.522783 | orchestrator | + osism apply -a upgrade mariadb 2026-02-19 05:28:44.560693 | orchestrator | 2026-02-19 05:28:44 | INFO  | Task 
3bb732d2-ec44-4c42-af5e-e5b859124f5d (mariadb) was prepared for execution. 2026-02-19 05:28:44.560802 | orchestrator | 2026-02-19 05:28:44 | INFO  | It takes a moment until task 3bb732d2-ec44-4c42-af5e-e5b859124f5d (mariadb) has been started and output is visible here. 2026-02-19 05:29:08.209365 | orchestrator | 2026-02-19 05:29:08.209461 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 05:29:08.209471 | orchestrator | 2026-02-19 05:29:08.209478 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 05:29:08.209485 | orchestrator | Thursday 19 February 2026 05:28:50 +0000 (0:00:01.400) 0:00:01.400 ***** 2026-02-19 05:29:08.209492 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:29:08.209500 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:29:08.209506 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:29:08.209512 | orchestrator | 2026-02-19 05:29:08.209519 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 05:29:08.209525 | orchestrator | Thursday 19 February 2026 05:28:51 +0000 (0:00:01.781) 0:00:03.182 ***** 2026-02-19 05:29:08.209532 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-19 05:29:08.209539 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-19 05:29:08.209545 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-19 05:29:08.209551 | orchestrator | 2026-02-19 05:29:08.209557 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-19 05:29:08.209564 | orchestrator | 2026-02-19 05:29:08.209601 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-19 05:29:08.209608 | orchestrator | Thursday 19 February 2026 05:28:53 +0000 (0:00:01.838) 0:00:05.021 ***** 2026-02-19 05:29:08.209615 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:29:08.209622 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-19 05:29:08.209652 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-19 05:29:08.209658 | orchestrator | 2026-02-19 05:29:08.209664 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-19 05:29:08.209671 | orchestrator | Thursday 19 February 2026 05:28:55 +0000 (0:00:01.188) 0:00:06.210 ***** 2026-02-19 05:29:08.209691 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:29:08.209698 | orchestrator | 2026-02-19 05:29:08.209704 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-19 05:29:08.209711 | orchestrator | Thursday 19 February 2026 05:28:56 +0000 (0:00:01.782) 0:00:07.992 ***** 2026-02-19 05:29:08.209723 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' 
server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 05:29:08.209750 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 05:29:08.209768 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 05:29:08.209775 | orchestrator | 2026-02-19 05:29:08.209782 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-19 05:29:08.209788 | orchestrator | Thursday 19 February 2026 05:29:00 +0000 (0:00:03.404) 0:00:11.397 ***** 2026-02-19 05:29:08.209795 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:08.209802 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:08.209809 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:29:08.209815 | orchestrator | 2026-02-19 05:29:08.209821 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-19 05:29:08.209827 | orchestrator | Thursday 19 February 2026 05:29:01 +0000 (0:00:01.581) 0:00:12.978 ***** 2026-02-19 05:29:08.209833 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:08.209839 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:08.209845 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:29:08.209851 | orchestrator | 2026-02-19 05:29:08.209857 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-19 05:29:08.209863 | orchestrator | Thursday 19 February 2026 05:29:03 +0000 (0:00:02.168) 0:00:15.147 ***** 2026-02-19 05:29:08.209923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 05:29:20.598820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 05:29:20.598965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 05:29:20.599026 | orchestrator | 2026-02-19 05:29:20.599050 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-19 05:29:20.599071 | orchestrator | Thursday 19 February 2026 05:29:08 +0000 (0:00:04.233) 0:00:19.381 ***** 2026-02-19 05:29:20.599087 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:20.599105 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:20.599122 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:29:20.599139 | 
orchestrator | 2026-02-19 05:29:20.599156 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-19 05:29:20.599215 | orchestrator | Thursday 19 February 2026 05:29:10 +0000 (0:00:02.147) 0:00:21.528 ***** 2026-02-19 05:29:20.599235 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:29:20.599251 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:29:20.599267 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:29:20.599284 | orchestrator | 2026-02-19 05:29:20.599300 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-19 05:29:20.599317 | orchestrator | Thursday 19 February 2026 05:29:15 +0000 (0:00:04.870) 0:00:26.399 ***** 2026-02-19 05:29:20.599333 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:29:20.599350 | orchestrator | 2026-02-19 05:29:20.599365 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-19 05:29:20.599381 | orchestrator | Thursday 19 February 2026 05:29:17 +0000 (0:00:01.830) 0:00:28.229 ***** 2026-02-19 05:29:20.599399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:29:20.599435 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:29:20.599476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:29:28.058985 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:28.059132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:29:28.059157 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:28.059193 | orchestrator | 2026-02-19 05:29:28.059207 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-19 05:29:28.059219 | orchestrator | Thursday 19 February 2026 05:29:20 +0000 (0:00:03.539) 0:00:31.768 ***** 2026-02-19 05:29:28.059240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:29:28.059252 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:29:28.059286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:29:28.059300 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:28.059319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:29:28.059331 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:28.059342 | orchestrator | 2026-02-19 05:29:28.059353 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-19 05:29:28.059364 | orchestrator | Thursday 19 February 2026 05:29:24 +0000 (0:00:03.503) 0:00:35.272 ***** 2026-02-19 05:29:28.059390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:29:32.183838 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:29:32.183926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:29:32.183940 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:32.183963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:29:32.183972 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:32.183980 | orchestrator | 2026-02-19 05:29:32.183988 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-02-19 05:29:32.184014 | orchestrator | Thursday 19 February 2026 05:29:28 +0000 (0:00:03.958) 0:00:39.230 ***** 2026-02-19 05:29:32.184037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 05:29:32.184051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 05:29:32.184066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-19 05:29:47.183059 | orchestrator | 2026-02-19 05:29:47.183158 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-02-19 05:29:47.183168 | orchestrator | Thursday 19 February 2026 05:29:32 +0000 (0:00:04.131) 0:00:43.362 ***** 2026-02-19 05:29:47.183176 | orchestrator | changed: [testbed-node-0] => { 2026-02-19 05:29:47.183184 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:29:47.183191 | orchestrator | } 2026-02-19 05:29:47.183198 | orchestrator | changed: [testbed-node-1] => { 2026-02-19 05:29:47.183207 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:29:47.183215 | orchestrator | } 2026-02-19 05:29:47.183223 | orchestrator | 
changed: [testbed-node-2] => { 2026-02-19 05:29:47.183230 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:29:47.183238 | orchestrator | } 2026-02-19 05:29:47.183245 | orchestrator | 2026-02-19 05:29:47.183252 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-19 05:29:47.183258 | orchestrator | Thursday 19 February 2026 05:29:33 +0000 (0:00:01.369) 0:00:44.732 ***** 2026-02-19 05:29:47.183283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:29:47.183315 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:29:47.183338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:29:47.183347 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:47.183357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:29:47.183368 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:47.183374 | orchestrator | 2026-02-19 05:29:47.183381 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-02-19 05:29:47.183387 | orchestrator | Thursday 19 February 2026 05:29:37 +0000 (0:00:03.956) 0:00:48.688 ***** 2026-02-19 05:29:47.183395 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:29:47.183401 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:47.183406 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:47.183412 | orchestrator | 2026-02-19 05:29:47.183418 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-02-19 05:29:47.183424 | orchestrator | Thursday 19 February 2026 05:29:38 +0000 (0:00:01.389) 0:00:50.078 ***** 2026-02-19 05:29:47.183430 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:29:47.183435 | orchestrator | 2026-02-19 05:29:47.183441 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-02-19 05:29:47.183448 | orchestrator | Thursday 19 February 2026 05:29:40 +0000 (0:00:01.110) 0:00:51.189 ***** 2026-02-19 05:29:47.183454 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:29:47.183461 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:47.183467 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:47.183473 | orchestrator | 2026-02-19 05:29:47.183484 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-02-19 05:29:47.183490 | orchestrator | Thursday 19 February 2026 05:29:41 +0000 (0:00:01.409) 0:00:52.598 ***** 2026-02-19 05:29:47.183495 | orchestrator | skipping: 
[testbed-node-0] 2026-02-19 05:29:47.183501 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:47.183508 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:47.183516 | orchestrator | 2026-02-19 05:29:47.183522 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-02-19 05:29:47.183529 | orchestrator | Thursday 19 February 2026 05:29:42 +0000 (0:00:01.532) 0:00:54.131 ***** 2026-02-19 05:29:47.183536 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:29:47.183544 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:47.183550 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:47.183555 | orchestrator | 2026-02-19 05:29:47.183561 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-02-19 05:29:47.183566 | orchestrator | Thursday 19 February 2026 05:29:44 +0000 (0:00:01.395) 0:00:55.526 ***** 2026-02-19 05:29:47.183572 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:29:47.183577 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:47.183583 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:47.183635 | orchestrator | 2026-02-19 05:29:47.183643 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-02-19 05:29:47.183649 | orchestrator | Thursday 19 February 2026 05:29:45 +0000 (0:00:01.416) 0:00:56.942 ***** 2026-02-19 05:29:47.183656 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:29:47.183663 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:29:47.183671 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:29:47.183678 | orchestrator | 2026-02-19 05:29:47.183691 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-02-19 05:30:04.409997 | orchestrator | Thursday 19 February 2026 05:29:47 +0000 (0:00:01.408) 0:00:58.351 ***** 2026-02-19 05:30:04.410153 | orchestrator | skipping: 
[testbed-node-0] 2026-02-19 05:30:04.410169 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:04.410179 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:04.410188 | orchestrator | 2026-02-19 05:30:04.410199 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-02-19 05:30:04.410208 | orchestrator | Thursday 19 February 2026 05:29:48 +0000 (0:00:01.541) 0:00:59.893 ***** 2026-02-19 05:30:04.410218 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-19 05:30:04.410250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-19 05:30:04.410259 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-19 05:30:04.410268 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:04.410277 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-19 05:30:04.410286 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-19 05:30:04.410294 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-19 05:30:04.410303 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:04.410312 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-19 05:30:04.410320 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-19 05:30:04.410342 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-19 05:30:04.410352 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:04.410361 | orchestrator | 2026-02-19 05:30:04.410370 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-02-19 05:30:04.410379 | orchestrator | Thursday 19 February 2026 05:29:50 +0000 (0:00:01.432) 0:01:01.325 ***** 2026-02-19 05:30:04.410387 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:04.410396 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:04.410405 | 
orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:04.410414 | orchestrator | 2026-02-19 05:30:04.410423 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-02-19 05:30:04.410432 | orchestrator | Thursday 19 February 2026 05:29:51 +0000 (0:00:01.360) 0:01:02.686 ***** 2026-02-19 05:30:04.410441 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:04.410449 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:04.410458 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:04.410466 | orchestrator | 2026-02-19 05:30:04.410475 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-02-19 05:30:04.410484 | orchestrator | Thursday 19 February 2026 05:29:52 +0000 (0:00:01.311) 0:01:03.997 ***** 2026-02-19 05:30:04.410492 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:04.410501 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:04.410510 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:04.410518 | orchestrator | 2026-02-19 05:30:04.410527 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-02-19 05:30:04.410536 | orchestrator | Thursday 19 February 2026 05:29:54 +0000 (0:00:01.331) 0:01:05.329 ***** 2026-02-19 05:30:04.410546 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:04.410556 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:04.410566 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:04.410575 | orchestrator | 2026-02-19 05:30:04.410585 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-02-19 05:30:04.410658 | orchestrator | Thursday 19 February 2026 05:29:55 +0000 (0:00:01.293) 0:01:06.623 ***** 2026-02-19 05:30:04.410676 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:04.410691 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:04.410707 | 
orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:04.410722 | orchestrator | 2026-02-19 05:30:04.410737 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-02-19 05:30:04.410752 | orchestrator | Thursday 19 February 2026 05:29:56 +0000 (0:00:01.326) 0:01:07.949 ***** 2026-02-19 05:30:04.410766 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:04.410782 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:04.410797 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:04.410813 | orchestrator | 2026-02-19 05:30:04.410829 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-02-19 05:30:04.410846 | orchestrator | Thursday 19 February 2026 05:29:58 +0000 (0:00:01.509) 0:01:09.459 ***** 2026-02-19 05:30:04.410863 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:04.410878 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:04.410895 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:04.410923 | orchestrator | 2026-02-19 05:30:04.410937 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-02-19 05:30:04.410946 | orchestrator | Thursday 19 February 2026 05:29:59 +0000 (0:00:01.419) 0:01:10.878 ***** 2026-02-19 05:30:04.410954 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:04.410963 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:04.410972 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:04.410980 | orchestrator | 2026-02-19 05:30:04.410989 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-02-19 05:30:04.410998 | orchestrator | Thursday 19 February 2026 05:30:01 +0000 (0:00:01.341) 0:01:12.220 ***** 2026-02-19 05:30:04.411039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:30:04.411053 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:04.411063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:30:04.411080 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:04.411097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:30:20.921004 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:20.921122 | orchestrator | 2026-02-19 05:30:20.921139 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-02-19 05:30:20.921174 | orchestrator | Thursday 19 February 2026 
05:30:04 +0000 (0:00:03.357) 0:01:15.577 ***** 2026-02-19 05:30:20.921195 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:20.921214 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:20.921231 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:20.921250 | orchestrator | 2026-02-19 05:30:20.921267 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-02-19 05:30:20.921288 | orchestrator | Thursday 19 February 2026 05:30:05 +0000 (0:00:01.553) 0:01:17.131 ***** 2026-02-19 05:30:20.921313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:30:20.921361 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:20.921395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:30:20.921408 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:20.921428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-19 05:30:20.921447 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:20.921458 | orchestrator | 2026-02-19 05:30:20.921469 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-02-19 05:30:20.921480 | orchestrator | Thursday 19 February 2026 05:30:09 +0000 (0:00:03.272) 0:01:20.404 ***** 2026-02-19 05:30:20.921491 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:20.921502 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:20.921515 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:20.921526 | orchestrator | 2026-02-19 05:30:20.921538 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-19 05:30:20.921551 | orchestrator | Thursday 19 February 2026 05:30:10 +0000 (0:00:01.704) 0:01:22.109 ***** 2026-02-19 05:30:20.921563 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:20.921575 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:20.921587 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:20.921599 | orchestrator | 2026-02-19 05:30:20.921646 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-19 05:30:20.921659 | orchestrator | Thursday 19 February 2026 05:30:12 +0000 (0:00:01.463) 0:01:23.572 ***** 2026-02-19 05:30:20.921671 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:20.921683 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:20.921696 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:20.921708 | orchestrator | 2026-02-19 05:30:20.921721 | orchestrator 
| TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-19 05:30:20.921733 | orchestrator | Thursday 19 February 2026 05:30:13 +0000 (0:00:01.369) 0:01:24.942 ***** 2026-02-19 05:30:20.921746 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:20.921759 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:20.921771 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:20.921783 | orchestrator | 2026-02-19 05:30:20.921796 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-19 05:30:20.921808 | orchestrator | Thursday 19 February 2026 05:30:15 +0000 (0:00:01.733) 0:01:26.675 ***** 2026-02-19 05:30:20.921819 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:30:20.921832 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:30:20.921844 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:30:20.921857 | orchestrator | 2026-02-19 05:30:20.921867 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-19 05:30:20.921878 | orchestrator | Thursday 19 February 2026 05:30:17 +0000 (0:00:01.943) 0:01:28.619 ***** 2026-02-19 05:30:20.921889 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:30:20.921900 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:30:20.921911 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:30:20.921922 | orchestrator | 2026-02-19 05:30:20.921933 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-19 05:30:20.921944 | orchestrator | Thursday 19 February 2026 05:30:19 +0000 (0:00:01.911) 0:01:30.531 ***** 2026-02-19 05:30:20.921954 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:30:20.921965 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:30:20.921975 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:30:20.921986 | orchestrator | 2026-02-19 05:30:20.921997 | orchestrator | TASK [mariadb : Establish whether the 
cluster has already existed] ************* 2026-02-19 05:30:20.922007 | orchestrator | Thursday 19 February 2026 05:30:20 +0000 (0:00:01.368) 0:01:31.900 ***** 2026-02-19 05:30:20.922096 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:32:57.805625 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:32:57.805803 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:32:57.805820 | orchestrator | 2026-02-19 05:32:57.805851 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-19 05:32:57.805864 | orchestrator | Thursday 19 February 2026 05:30:22 +0000 (0:00:01.328) 0:01:33.228 ***** 2026-02-19 05:32:57.805876 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:32:57.805888 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:32:57.805898 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:32:57.805909 | orchestrator | 2026-02-19 05:32:57.805920 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-19 05:32:57.805931 | orchestrator | Thursday 19 February 2026 05:30:24 +0000 (0:00:02.135) 0:01:35.364 ***** 2026-02-19 05:32:57.805942 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:32:57.805953 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:32:57.805963 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:32:57.805974 | orchestrator | 2026-02-19 05:32:57.805985 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-19 05:32:57.805996 | orchestrator | Thursday 19 February 2026 05:30:25 +0000 (0:00:01.443) 0:01:36.808 ***** 2026-02-19 05:32:57.806007 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:32:57.806073 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:32:57.806087 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:32:57.806098 | orchestrator | 2026-02-19 05:32:57.806109 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-19 
05:32:57.806120 | orchestrator | Thursday 19 February 2026 05:30:27 +0000 (0:00:01.580) 0:01:38.389 ***** 2026-02-19 05:32:57.806133 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:32:57.806146 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:32:57.806158 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:32:57.806170 | orchestrator | 2026-02-19 05:32:57.806182 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-19 05:32:57.806194 | orchestrator | Thursday 19 February 2026 05:30:30 +0000 (0:00:03.641) 0:01:42.030 ***** 2026-02-19 05:32:57.806207 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:32:57.806219 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:32:57.806231 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:32:57.806244 | orchestrator | 2026-02-19 05:32:57.806256 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-19 05:32:57.806269 | orchestrator | Thursday 19 February 2026 05:30:32 +0000 (0:00:01.354) 0:01:43.385 ***** 2026-02-19 05:32:57.806280 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:32:57.806292 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:32:57.806304 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:32:57.806316 | orchestrator | 2026-02-19 05:32:57.806329 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-19 05:32:57.806342 | orchestrator | Thursday 19 February 2026 05:30:33 +0000 (0:00:01.341) 0:01:44.727 ***** 2026-02-19 05:32:57.806354 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:32:57.806367 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:32:57.806379 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:32:57.806392 | orchestrator | 2026-02-19 05:32:57.806405 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-19 05:32:57.806421 | orchestrator | Thursday 19 
February 2026 05:30:35 +0000 (0:00:01.738) 0:01:46.466 ***** 2026-02-19 05:32:57.806441 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:32:57.806460 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:32:57.806478 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:32:57.806489 | orchestrator | 2026-02-19 05:32:57.806500 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-19 05:32:57.806511 | orchestrator | Thursday 19 February 2026 05:30:36 +0000 (0:00:01.522) 0:01:47.988 ***** 2026-02-19 05:32:57.806522 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:32:57.806565 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:32:57.806577 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:32:57.806588 | orchestrator | 2026-02-19 05:32:57.806599 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-19 05:32:57.806610 | orchestrator | Thursday 19 February 2026 05:30:38 +0000 (0:00:01.467) 0:01:49.455 ***** 2026-02-19 05:32:57.806620 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:32:57.806631 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:32:57.806642 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:32:57.806653 | orchestrator | 2026-02-19 05:32:57.806683 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-19 05:32:57.806694 | orchestrator | Thursday 19 February 2026 05:30:39 +0000 (0:00:01.562) 0:01:51.018 ***** 2026-02-19 05:32:57.806705 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:32:57.806716 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:32:57.806726 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:32:57.806737 | orchestrator | 2026-02-19 05:32:57.806747 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-19 05:32:57.806758 | orchestrator | 2026-02-19 
05:32:57.806769 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-19 05:32:57.806779 | orchestrator | Thursday 19 February 2026 05:30:41 +0000 (0:00:01.731) 0:01:52.749 ***** 2026-02-19 05:32:57.806790 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:32:57.806801 | orchestrator | 2026-02-19 05:32:57.806811 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-19 05:32:57.806822 | orchestrator | Thursday 19 February 2026 05:31:07 +0000 (0:00:26.109) 0:02:18.858 ***** 2026-02-19 05:32:57.806833 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service port liveness (10 retries left). 2026-02-19 05:32:57.806845 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:32:57.806856 | orchestrator | 2026-02-19 05:32:57.806866 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-19 05:32:57.806877 | orchestrator | Thursday 19 February 2026 05:31:15 +0000 (0:00:08.190) 0:02:27.050 ***** 2026-02-19 05:32:57.806888 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:32:57.806898 | orchestrator | 2026-02-19 05:32:57.806910 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-19 05:32:57.806929 | orchestrator | 2026-02-19 05:32:57.806942 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-19 05:32:57.806953 | orchestrator | Thursday 19 February 2026 05:31:19 +0000 (0:00:03.177) 0:02:30.227 ***** 2026-02-19 05:32:57.806964 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:32:57.806975 | orchestrator | 2026-02-19 05:32:57.807006 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-19 05:32:57.807025 | orchestrator | Thursday 19 February 2026 05:31:42 +0000 (0:00:23.726) 0:02:53.954 ***** 2026-02-19 05:32:57.807036 | orchestrator | 
ok: [testbed-node-1] 2026-02-19 05:32:57.807047 | orchestrator | 2026-02-19 05:32:57.807058 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-19 05:32:57.807069 | orchestrator | Thursday 19 February 2026 05:31:48 +0000 (0:00:05.548) 0:02:59.503 ***** 2026-02-19 05:32:57.807079 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:32:57.807090 | orchestrator | 2026-02-19 05:32:57.807101 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-19 05:32:57.807112 | orchestrator | 2026-02-19 05:32:57.807123 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-19 05:32:57.807133 | orchestrator | Thursday 19 February 2026 05:31:51 +0000 (0:00:02.984) 0:03:02.487 ***** 2026-02-19 05:32:57.807144 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:32:57.807155 | orchestrator | 2026-02-19 05:32:57.807166 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-19 05:32:57.807176 | orchestrator | Thursday 19 February 2026 05:32:16 +0000 (0:00:24.734) 0:03:27.222 ***** 2026-02-19 05:32:57.807187 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left). 
2026-02-19 05:32:57.807206 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:32:57.807217 | orchestrator | 2026-02-19 05:32:57.807228 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-19 05:32:57.807239 | orchestrator | Thursday 19 February 2026 05:32:24 +0000 (0:00:08.063) 0:03:35.285 ***** 2026-02-19 05:32:57.807250 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-19 05:32:57.807261 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-19 05:32:57.807271 | orchestrator | mariadb_bootstrap_restart 2026-02-19 05:32:57.807282 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:32:57.807293 | orchestrator | 2026-02-19 05:32:57.807304 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-19 05:32:57.807314 | orchestrator | skipping: no hosts matched 2026-02-19 05:32:57.807325 | orchestrator | 2026-02-19 05:32:57.807336 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-19 05:32:57.807347 | orchestrator | skipping: no hosts matched 2026-02-19 05:32:57.807358 | orchestrator | 2026-02-19 05:32:57.807369 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-19 05:32:57.807383 | orchestrator | 2026-02-19 05:32:57.807402 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-19 05:32:57.807420 | orchestrator | Thursday 19 February 2026 05:32:28 +0000 (0:00:04.035) 0:03:39.320 ***** 2026-02-19 05:32:57.807438 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:32:57.807455 | orchestrator | 2026-02-19 05:32:57.807473 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-19 05:32:57.807491 | orchestrator | Thursday 19 February 2026 
05:32:29 +0000 (0:00:01.863) 0:03:41.184 ***** 2026-02-19 05:32:57.807508 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:32:57.807525 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:32:57.807542 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:32:57.807560 | orchestrator | 2026-02-19 05:32:57.807577 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-19 05:32:57.807595 | orchestrator | Thursday 19 February 2026 05:32:33 +0000 (0:00:03.337) 0:03:44.521 ***** 2026-02-19 05:32:57.807613 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:32:57.807632 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:32:57.807643 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:32:57.807654 | orchestrator | 2026-02-19 05:32:57.807689 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-19 05:32:57.807700 | orchestrator | Thursday 19 February 2026 05:32:36 +0000 (0:00:03.480) 0:03:48.002 ***** 2026-02-19 05:32:57.807711 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:32:57.807722 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:32:57.807733 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:32:57.807743 | orchestrator | 2026-02-19 05:32:57.807754 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-19 05:32:57.807765 | orchestrator | Thursday 19 February 2026 05:32:40 +0000 (0:00:03.393) 0:03:51.396 ***** 2026-02-19 05:32:57.807775 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:32:57.807786 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:32:57.807810 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:32:57.807821 | orchestrator | 2026-02-19 05:32:57.807832 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-19 05:32:57.807842 | orchestrator | Thursday 19 February 2026 05:32:43 +0000 
(0:00:03.771) 0:03:55.167 ***** 2026-02-19 05:32:57.807853 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:32:57.807864 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:32:57.807874 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:32:57.807885 | orchestrator | 2026-02-19 05:32:57.807896 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-19 05:32:57.807906 | orchestrator | Thursday 19 February 2026 05:32:50 +0000 (0:00:06.086) 0:04:01.254 ***** 2026-02-19 05:32:57.807933 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:32:57.807947 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:32:57.807958 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:32:57.807969 | orchestrator | 2026-02-19 05:32:57.807979 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-19 05:32:57.807990 | orchestrator | Thursday 19 February 2026 05:32:53 +0000 (0:00:03.049) 0:04:04.304 ***** 2026-02-19 05:32:57.808001 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:32:57.808011 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:32:57.808022 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:32:57.808033 | orchestrator | 2026-02-19 05:32:57.808043 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-19 05:32:57.808054 | orchestrator | Thursday 19 February 2026 05:32:54 +0000 (0:00:01.349) 0:04:05.653 ***** 2026-02-19 05:32:57.808065 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:32:57.808076 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:32:57.808086 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:32:57.808097 | orchestrator | 2026-02-19 05:32:57.808118 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-19 05:33:18.011158 | orchestrator | Thursday 19 February 2026 05:32:57 +0000 (0:00:03.322) 0:04:08.975 ***** 
2026-02-19 05:33:18.011241 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:33:18.011248 | orchestrator | 2026-02-19 05:33:18.011255 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-02-19 05:33:18.011260 | orchestrator | Thursday 19 February 2026 05:32:59 +0000 (0:00:01.866) 0:04:10.842 ***** 2026-02-19 05:33:18.011265 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:33:18.011272 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:33:18.011277 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:33:18.011282 | orchestrator | 2026-02-19 05:33:18.011287 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:33:18.011293 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-19 05:33:18.011300 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-19 05:33:18.011305 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-19 05:33:18.011310 | orchestrator | 2026-02-19 05:33:18.011315 | orchestrator | 2026-02-19 05:33:18.011320 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 05:33:18.011324 | orchestrator | Thursday 19 February 2026 05:33:17 +0000 (0:00:17.905) 0:04:28.748 ***** 2026-02-19 05:33:18.011329 | orchestrator | =============================================================================== 2026-02-19 05:33:18.011334 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 74.57s 2026-02-19 05:33:18.011339 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 21.80s 2026-02-19 05:33:18.011344 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 
17.91s 2026-02-19 05:33:18.011349 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.20s 2026-02-19 05:33:18.011353 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.09s 2026-02-19 05:33:18.011358 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.87s 2026-02-19 05:33:18.011363 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.23s 2026-02-19 05:33:18.011367 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.13s 2026-02-19 05:33:18.011372 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.96s 2026-02-19 05:33:18.011377 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.96s 2026-02-19 05:33:18.011397 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.77s 2026-02-19 05:33:18.011402 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.64s 2026-02-19 05:33:18.011406 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.54s 2026-02-19 05:33:18.011411 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.50s 2026-02-19 05:33:18.011417 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.48s 2026-02-19 05:33:18.011422 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.40s 2026-02-19 05:33:18.011426 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 3.39s 2026-02-19 05:33:18.011431 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.36s 2026-02-19 05:33:18.011436 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.34s 
2026-02-19 05:33:18.011441 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.32s 2026-02-19 05:33:18.293900 | orchestrator | + osism apply -a upgrade rabbitmq 2026-02-19 05:33:20.252791 | orchestrator | 2026-02-19 05:33:20 | INFO  | Task 510bdb03-4065-4959-bd62-695e38e58437 (rabbitmq) was prepared for execution. 2026-02-19 05:33:20.252898 | orchestrator | 2026-02-19 05:33:20 | INFO  | It takes a moment until task 510bdb03-4065-4959-bd62-695e38e58437 (rabbitmq) has been started and output is visible here. 2026-02-19 05:33:50.931480 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-19 05:33:50.931579 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-19 05:33:50.931598 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-19 05:33:50.931606 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-19 05:33:50.931621 | orchestrator | 2026-02-19 05:33:50.931635 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 05:33:50.931645 | orchestrator | 2026-02-19 05:33:50.931656 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 05:33:50.931669 | orchestrator | Thursday 19 February 2026 05:33:25 +0000 (0:00:01.156) 0:00:01.156 ***** 2026-02-19 05:33:50.931680 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:33:50.931768 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:33:50.931780 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:33:50.931790 | orchestrator | 2026-02-19 05:33:50.931802 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 05:33:50.931833 | orchestrator | Thursday 19 February 2026 05:33:26 +0000 (0:00:00.935) 0:00:02.091 ***** 2026-02-19 05:33:50.931846 | orchestrator | ok: [testbed-node-0] => 
(item=enable_rabbitmq_True) 2026-02-19 05:33:50.931858 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-19 05:33:50.931870 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-19 05:33:50.931881 | orchestrator | 2026-02-19 05:33:50.931892 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-19 05:33:50.931902 | orchestrator | 2026-02-19 05:33:50.931912 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-19 05:33:50.931921 | orchestrator | Thursday 19 February 2026 05:33:27 +0000 (0:00:00.939) 0:00:03.030 ***** 2026-02-19 05:33:50.931931 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:33:50.931942 | orchestrator | 2026-02-19 05:33:50.931952 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-19 05:33:50.931961 | orchestrator | Thursday 19 February 2026 05:33:28 +0000 (0:00:01.149) 0:00:04.180 ***** 2026-02-19 05:33:50.931971 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:33:50.932004 | orchestrator | 2026-02-19 05:33:50.932014 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-19 05:33:50.932024 | orchestrator | Thursday 19 February 2026 05:33:29 +0000 (0:00:01.407) 0:00:05.587 ***** 2026-02-19 05:33:50.932033 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:33:50.932042 | orchestrator | 2026-02-19 05:33:50.932052 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-19 05:33:50.932062 | orchestrator | Thursday 19 February 2026 05:33:32 +0000 (0:00:02.339) 0:00:07.927 ***** 2026-02-19 05:33:50.932072 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:33:50.932082 | orchestrator | 2026-02-19 05:33:50.932092 | orchestrator | TASK [rabbitmq : Check if 
running RabbitMQ is at most one version behind] ****** 2026-02-19 05:33:50.932102 | orchestrator | Thursday 19 February 2026 05:33:42 +0000 (0:00:09.881) 0:00:17.808 ***** 2026-02-19 05:33:50.932111 | orchestrator | ok: [testbed-node-0] => { 2026-02-19 05:33:50.932121 | orchestrator |  "changed": false, 2026-02-19 05:33:50.932132 | orchestrator |  "msg": "All assertions passed" 2026-02-19 05:33:50.932142 | orchestrator | } 2026-02-19 05:33:50.932153 | orchestrator | 2026-02-19 05:33:50.932162 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-19 05:33:50.932172 | orchestrator | Thursday 19 February 2026 05:33:42 +0000 (0:00:00.318) 0:00:18.127 ***** 2026-02-19 05:33:50.932181 | orchestrator | ok: [testbed-node-0] => { 2026-02-19 05:33:50.932192 | orchestrator |  "changed": false, 2026-02-19 05:33:50.932203 | orchestrator |  "msg": "All assertions passed" 2026-02-19 05:33:50.932214 | orchestrator | } 2026-02-19 05:33:50.932226 | orchestrator | 2026-02-19 05:33:50.932237 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-19 05:33:50.932246 | orchestrator | Thursday 19 February 2026 05:33:43 +0000 (0:00:00.688) 0:00:18.815 ***** 2026-02-19 05:33:50.932253 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:33:50.932260 | orchestrator | 2026-02-19 05:33:50.932267 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-19 05:33:50.932274 | orchestrator | Thursday 19 February 2026 05:33:44 +0000 (0:00:00.940) 0:00:19.756 ***** 2026-02-19 05:33:50.932280 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:33:50.932287 | orchestrator | 2026-02-19 05:33:50.932293 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-19 05:33:50.932300 | orchestrator | Thursday 19 February 
2026 05:33:45 +0000 (0:00:01.437) 0:00:21.193 ***** 2026-02-19 05:33:50.932306 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:33:50.932313 | orchestrator | 2026-02-19 05:33:50.932319 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-19 05:33:50.932326 | orchestrator | Thursday 19 February 2026 05:33:47 +0000 (0:00:02.106) 0:00:23.300 ***** 2026-02-19 05:33:50.932332 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:33:50.932339 | orchestrator | 2026-02-19 05:33:50.932346 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-19 05:33:50.932352 | orchestrator | Thursday 19 February 2026 05:33:48 +0000 (0:00:01.031) 0:00:24.332 ***** 2026-02-19 05:33:50.932384 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:33:50.932410 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 
'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:33:50.932419 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': 
{'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:33:50.932426 | orchestrator | 2026-02-19 05:33:50.932433 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-19 05:33:50.932440 | orchestrator | Thursday 19 February 2026 05:33:49 +0000 (0:00:00.788) 0:00:25.120 ***** 2026-02-19 05:33:50.932452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:34:02.178970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:34:02.179128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:34:02.179158 | orchestrator | 2026-02-19 05:34:02.179179 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-19 05:34:02.179200 | orchestrator | Thursday 19 February 2026 05:33:50 +0000 (0:00:01.392) 0:00:26.513 ***** 2026-02-19 
05:34:02.179216 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-19 05:34:02.179234 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-19 05:34:02.179252 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-19 05:34:02.179272 | orchestrator | 2026-02-19 05:34:02.179291 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-19 05:34:02.179309 | orchestrator | Thursday 19 February 2026 05:33:52 +0000 (0:00:01.354) 0:00:27.867 ***** 2026-02-19 05:34:02.179328 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-19 05:34:02.179347 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-19 05:34:02.179367 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-19 05:34:02.179384 | orchestrator | 2026-02-19 05:34:02.179402 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-19 05:34:02.179421 | orchestrator | Thursday 19 February 2026 05:33:54 +0000 (0:00:01.991) 0:00:29.859 ***** 2026-02-19 05:34:02.179439 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-19 05:34:02.179458 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-19 05:34:02.179478 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-19 05:34:02.179500 | orchestrator | 2026-02-19 05:34:02.179520 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-19 05:34:02.179540 | orchestrator | Thursday 19 February 2026 05:33:55 +0000 (0:00:01.323) 
0:00:31.182 ***** 2026-02-19 05:34:02.179562 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-19 05:34:02.179600 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-19 05:34:02.179619 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-19 05:34:02.179632 | orchestrator | 2026-02-19 05:34:02.179644 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-19 05:34:02.179677 | orchestrator | Thursday 19 February 2026 05:33:56 +0000 (0:00:01.372) 0:00:32.555 ***** 2026-02-19 05:34:02.179719 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-19 05:34:02.179731 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-19 05:34:02.179744 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-19 05:34:02.179757 | orchestrator | 2026-02-19 05:34:02.179769 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-19 05:34:02.179781 | orchestrator | Thursday 19 February 2026 05:33:58 +0000 (0:00:01.302) 0:00:33.857 ***** 2026-02-19 05:34:02.179793 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-19 05:34:02.179805 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-19 05:34:02.179817 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-19 05:34:02.179829 | orchestrator | 2026-02-19 05:34:02.179841 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-19 05:34:02.179852 | orchestrator | Thursday 19 February 2026 05:33:59 +0000 
(0:00:01.512) 0:00:35.370 ***** 2026-02-19 05:34:02.179872 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:34:02.179884 | orchestrator | 2026-02-19 05:34:02.179895 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-19 05:34:02.179905 | orchestrator | Thursday 19 February 2026 05:34:00 +0000 (0:00:00.941) 0:00:36.312 ***** 2026-02-19 05:34:02.179918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:34:02.179932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:34:02.180014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:34:08.346140 | orchestrator | 2026-02-19 05:34:08.346222 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-19 
05:34:08.346231 | orchestrator | Thursday 19 February 2026 05:34:02 +0000 (0:00:01.444) 0:00:37.757 ***** 2026-02-19 05:34:08.346254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 05:34:08.346262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 05:34:08.346267 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:34:08.346274 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:34:08.346279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 05:34:08.346298 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:34:08.346304 | orchestrator | 2026-02-19 05:34:08.346312 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-19 05:34:08.346319 | orchestrator | Thursday 19 February 2026 05:34:02 +0000 (0:00:00.405) 0:00:38.162 ***** 2026-02-19 05:34:08.346347 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 05:34:08.346357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 05:34:08.346365 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:34:08.346374 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:34:08.346379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 05:34:08.346388 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:34:08.346393 | orchestrator | 2026-02-19 05:34:08.346398 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-19 05:34:08.346402 | orchestrator | Thursday 19 February 2026 05:34:03 +0000 (0:00:01.021) 0:00:39.184 ***** 2026-02-19 05:34:08.346407 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:34:08.346412 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:34:08.346417 | orchestrator | ok: [testbed-node-1] 2026-02-19 
05:34:08.346421 | orchestrator | 2026-02-19 05:34:08.346426 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-19 05:34:08.346430 | orchestrator | Thursday 19 February 2026 05:34:07 +0000 (0:00:03.451) 0:00:42.636 ***** 2026-02-19 05:34:08.346442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:35:06.811226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:35:06.811572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-19 05:35:06.811650 | orchestrator | 2026-02-19 05:35:06.811678 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-19 05:35:06.811699 | orchestrator | Thursday 19 February 2026 05:34:08 +0000 (0:00:01.298) 0:00:43.934 ***** 2026-02-19 05:35:06.811841 | orchestrator | changed: [testbed-node-0] => { 2026-02-19 
05:35:06.811863 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:35:06.811883 | orchestrator | } 2026-02-19 05:35:06.811895 | orchestrator | changed: [testbed-node-1] => { 2026-02-19 05:35:06.811907 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:35:06.811918 | orchestrator | } 2026-02-19 05:35:06.811941 | orchestrator | changed: [testbed-node-2] => { 2026-02-19 05:35:06.811952 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:35:06.811963 | orchestrator | } 2026-02-19 05:35:06.811975 | orchestrator | 2026-02-19 05:35:06.811986 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-19 05:35:06.811998 | orchestrator | Thursday 19 February 2026 05:34:08 +0000 (0:00:00.375) 0:00:44.310 ***** 2026-02-19 05:35:06.812057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 05:35:06.812170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 
'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 05:35:06.812188 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-19 05:35:06.812214 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-19 05:35:06.812237 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:35:06.812248 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:35:06.812260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-19 05:35:06.812272 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:35:06.812283 | orchestrator | 2026-02-19 05:35:06.812294 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-19 05:35:06.812305 | orchestrator | Thursday 19 February 2026 05:34:09 +0000 (0:00:01.184) 0:00:45.494 ***** 2026-02-19 05:35:06.812316 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:35:06.812327 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:35:06.812338 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:35:06.812348 | orchestrator | 2026-02-19 05:35:06.812359 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-19 05:35:06.812370 | orchestrator | 2026-02-19 05:35:06.812381 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-19 05:35:06.812399 | orchestrator | Thursday 19 February 2026 05:34:10 +0000 (0:00:00.992) 0:00:46.487 ***** 2026-02-19 05:35:06.812418 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:35:06.812437 | orchestrator | 2026-02-19 05:35:06.812455 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-19 05:35:06.812473 | orchestrator | Thursday 19 February 2026 05:34:12 +0000 (0:00:01.233) 0:00:47.721 ***** 2026-02-19 05:35:06.812490 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:35:06.812508 | orchestrator | 2026-02-19 
05:35:06.812527 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-19 05:35:06.812547 | orchestrator | Thursday 19 February 2026 05:34:22 +0000 (0:00:10.736) 0:00:58.457 ***** 2026-02-19 05:35:06.812563 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:35:06.812580 | orchestrator | 2026-02-19 05:35:06.812597 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-19 05:35:06.812617 | orchestrator | Thursday 19 February 2026 05:34:31 +0000 (0:00:08.291) 0:01:06.749 ***** 2026-02-19 05:35:06.812635 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:35:06.812655 | orchestrator | 2026-02-19 05:35:06.812674 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-19 05:35:06.812692 | orchestrator | 2026-02-19 05:35:06.812740 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-19 05:35:06.812758 | orchestrator | Thursday 19 February 2026 05:34:43 +0000 (0:00:12.499) 0:01:19.248 ***** 2026-02-19 05:35:06.812777 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:35:06.812792 | orchestrator | 2026-02-19 05:35:06.812803 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-19 05:35:06.812814 | orchestrator | Thursday 19 February 2026 05:34:44 +0000 (0:00:01.117) 0:01:20.365 ***** 2026-02-19 05:35:06.812825 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:35:06.812867 | orchestrator | 2026-02-19 05:35:06.812879 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-19 05:35:06.812890 | orchestrator | Thursday 19 February 2026 05:34:53 +0000 (0:00:08.423) 0:01:28.789 ***** 2026-02-19 05:35:06.812915 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:35:57.450490 | orchestrator | 2026-02-19 05:35:57.450609 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2026-02-19 05:35:57.450626 | orchestrator | Thursday 19 February 2026 05:35:06 +0000 (0:00:13.602) 0:01:42.392 ***** 2026-02-19 05:35:57.450639 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:35:57.450651 | orchestrator | 2026-02-19 05:35:57.450681 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-19 05:35:57.450693 | orchestrator | 2026-02-19 05:35:57.450704 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-19 05:35:57.450778 | orchestrator | Thursday 19 February 2026 05:35:17 +0000 (0:00:10.369) 0:01:52.762 ***** 2026-02-19 05:35:57.450792 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:35:57.450804 | orchestrator | 2026-02-19 05:35:57.450815 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-19 05:35:57.450827 | orchestrator | Thursday 19 February 2026 05:35:18 +0000 (0:00:01.235) 0:01:53.998 ***** 2026-02-19 05:35:57.450838 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:35:57.450849 | orchestrator | 2026-02-19 05:35:57.450860 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-19 05:35:57.450871 | orchestrator | Thursday 19 February 2026 05:35:27 +0000 (0:00:09.299) 0:02:03.297 ***** 2026-02-19 05:35:57.450882 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:35:57.450893 | orchestrator | 2026-02-19 05:35:57.450904 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-19 05:35:57.450914 | orchestrator | Thursday 19 February 2026 05:35:41 +0000 (0:00:13.348) 0:02:16.646 ***** 2026-02-19 05:35:57.450925 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:35:57.450936 | orchestrator | 2026-02-19 05:35:57.450947 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2026-02-19 05:35:57.450958 | orchestrator | 2026-02-19 05:35:57.450969 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-19 05:35:57.450979 | orchestrator | Thursday 19 February 2026 05:35:51 +0000 (0:00:10.378) 0:02:27.025 ***** 2026-02-19 05:35:57.450990 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:35:57.451001 | orchestrator | 2026-02-19 05:35:57.451012 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-19 05:35:57.451023 | orchestrator | Thursday 19 February 2026 05:35:51 +0000 (0:00:00.542) 0:02:27.567 ***** 2026-02-19 05:35:57.451035 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:35:57.451049 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:35:57.451061 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:35:57.451074 | orchestrator | 2026-02-19 05:35:57.451098 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:35:57.451112 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-19 05:35:57.451126 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-19 05:35:57.451138 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-19 05:35:57.451150 | orchestrator | 2026-02-19 05:35:57.451163 | orchestrator | 2026-02-19 05:35:57.451176 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 05:35:57.451189 | orchestrator | Thursday 19 February 2026 05:35:57 +0000 (0:00:05.139) 0:02:32.707 ***** 2026-02-19 05:35:57.451201 | orchestrator | =============================================================================== 2026-02-19 05:35:57.451240 | orchestrator | rabbitmq 
: Restart rabbitmq container ---------------------------------- 35.24s 2026-02-19 05:35:57.451252 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 33.25s 2026-02-19 05:35:57.451265 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 28.46s 2026-02-19 05:35:57.451278 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.88s 2026-02-19 05:35:57.451290 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 5.14s 2026-02-19 05:35:57.451302 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 3.59s 2026-02-19 05:35:57.451314 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.45s 2026-02-19 05:35:57.451326 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 2.34s 2026-02-19 05:35:57.451338 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.11s 2026-02-19 05:35:57.451350 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.99s 2026-02-19 05:35:57.451363 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.51s 2026-02-19 05:35:57.451375 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.44s 2026-02-19 05:35:57.451387 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.44s 2026-02-19 05:35:57.451398 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.41s 2026-02-19 05:35:57.451409 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.39s 2026-02-19 05:35:57.451419 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.37s 2026-02-19 05:35:57.451430 | orchestrator | rabbitmq : Copying over 
rabbitmq-env.conf ------------------------------- 1.35s 2026-02-19 05:35:57.451441 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.32s 2026-02-19 05:35:57.451451 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.30s 2026-02-19 05:35:57.451462 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.30s 2026-02-19 05:35:57.728994 | orchestrator | + osism apply -a upgrade openvswitch 2026-02-19 05:35:59.776852 | orchestrator | 2026-02-19 05:35:59 | INFO  | Task ac47d7ac-3f1e-4e0b-8ef3-95ea97e6e105 (openvswitch) was prepared for execution. 2026-02-19 05:35:59.776944 | orchestrator | 2026-02-19 05:35:59 | INFO  | It takes a moment until task ac47d7ac-3f1e-4e0b-8ef3-95ea97e6e105 (openvswitch) has been started and output is visible here. 2026-02-19 05:36:25.994413 | orchestrator | 2026-02-19 05:36:25.994541 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 05:36:25.994572 | orchestrator | 2026-02-19 05:36:25.994595 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 05:36:25.994615 | orchestrator | Thursday 19 February 2026 05:36:05 +0000 (0:00:01.690) 0:00:01.690 ***** 2026-02-19 05:36:25.994636 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:36:25.994650 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:36:25.994661 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:36:25.994671 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:36:25.994682 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:36:25.994693 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:36:25.994704 | orchestrator | 2026-02-19 05:36:25.994715 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 05:36:25.994834 | orchestrator | Thursday 19 February 2026 05:36:08 +0000 (0:00:02.572) 0:00:04.263 ***** 
2026-02-19 05:36:25.994856 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-19 05:36:25.994876 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-19 05:36:25.994894 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-19 05:36:25.994912 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-19 05:36:25.994965 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-19 05:36:25.994987 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-19 05:36:25.995005 | orchestrator | 2026-02-19 05:36:25.995019 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-19 05:36:25.995031 | orchestrator | 2026-02-19 05:36:25.995043 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-19 05:36:25.995055 | orchestrator | Thursday 19 February 2026 05:36:11 +0000 (0:00:02.938) 0:00:07.201 ***** 2026-02-19 05:36:25.995068 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 05:36:25.995082 | orchestrator | 2026-02-19 05:36:25.995094 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-19 05:36:25.995107 | orchestrator | Thursday 19 February 2026 05:36:14 +0000 (0:00:03.141) 0:00:10.343 ***** 2026-02-19 05:36:25.995119 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-19 05:36:25.995132 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-19 05:36:25.995144 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-19 05:36:25.995156 | orchestrator | ok: [testbed-node-3] => 
(item=openvswitch) 2026-02-19 05:36:25.995168 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-19 05:36:25.995180 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-19 05:36:25.995192 | orchestrator | 2026-02-19 05:36:25.995205 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-19 05:36:25.995218 | orchestrator | Thursday 19 February 2026 05:36:16 +0000 (0:00:02.427) 0:00:12.771 ***** 2026-02-19 05:36:25.995238 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-19 05:36:25.995257 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-19 05:36:25.995275 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-19 05:36:25.995294 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-19 05:36:25.995313 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-19 05:36:25.995332 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-19 05:36:25.995350 | orchestrator | 2026-02-19 05:36:25.995370 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-19 05:36:25.995389 | orchestrator | Thursday 19 February 2026 05:36:19 +0000 (0:00:02.539) 0:00:15.310 ***** 2026-02-19 05:36:25.995408 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-19 05:36:25.995419 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:36:25.995432 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-19 05:36:25.995442 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:36:25.995453 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-19 05:36:25.995464 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:36:25.995475 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-19 05:36:25.995485 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:36:25.995496 | orchestrator | 
skipping: [testbed-node-4] => (item=openvswitch)  2026-02-19 05:36:25.995507 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:36:25.995517 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-19 05:36:25.995528 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:36:25.995539 | orchestrator | 2026-02-19 05:36:25.995550 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-19 05:36:25.995561 | orchestrator | Thursday 19 February 2026 05:36:21 +0000 (0:00:02.164) 0:00:17.475 ***** 2026-02-19 05:36:25.995572 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:36:25.995583 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:36:25.995593 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:36:25.995606 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:36:25.995626 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:36:25.995655 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:36:25.995673 | orchestrator | 2026-02-19 05:36:25.995692 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-19 05:36:25.995712 | orchestrator | Thursday 19 February 2026 05:36:23 +0000 (0:00:02.036) 0:00:19.512 ***** 2026-02-19 05:36:25.995798 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:25.995819 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:25.995832 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:25.995843 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:25.995855 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:25.995881 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:25.995900 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:28.265977 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:28.266122 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:28.266134 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:28.266140 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:28.266171 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:28.266176 | orchestrator | 2026-02-19 05:36:28.266182 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-19 05:36:28.266188 | orchestrator | Thursday 19 February 2026 05:36:25 +0000 (0:00:02.604) 0:00:22.116 ***** 2026-02-19 05:36:28.266204 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:28.266209 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:28.266219 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:28.266224 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:28.266233 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:28.266241 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:28.266250 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-02-19 05:36:33.848933 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:33.849047 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:33.849063 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:33.849115 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:33.849129 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:33.849141 | orchestrator | 2026-02-19 05:36:33.849155 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-19 05:36:33.849168 | orchestrator | Thursday 19 February 2026 05:36:29 
+0000 (0:00:03.376) 0:00:25.493 *****
2026-02-19 05:36:33.849179 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:36:33.849191 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:36:33.849202 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:36:33.849212 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:36:33.849223 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:36:33.849233 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:36:33.849244 | orchestrator |
2026-02-19 05:36:33.849255 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-02-19 05:36:33.849283 | orchestrator | Thursday 19 February 2026 05:36:31 +0000 (0:00:02.346) 0:00:27.840 *****
2026-02-19 05:36:33.849296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-19 05:36:33.849356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:33.849377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:33.849395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:33.849407 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 
'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:33.849428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-19 05:36:37.500115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:37.500251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:37.500264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:37.500288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:37.500298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:37.500323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-19 05:36:37.500334 | orchestrator | 2026-02-19 05:36:37.500345 | orchestrator | TASK 
[service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-02-19 05:36:37.500355 | orchestrator | Thursday 19 February 2026 05:36:35 +0000 (0:00:03.309) 0:00:31.149 *****
2026-02-19 05:36:37.500371 | orchestrator | changed: [testbed-node-0] => {
2026-02-19 05:36:37.500381 | orchestrator |  "msg": "Notifying handlers"
2026-02-19 05:36:37.500390 | orchestrator | }
2026-02-19 05:36:37.500399 | orchestrator | changed: [testbed-node-1] => {
2026-02-19 05:36:37.500407 | orchestrator |  "msg": "Notifying handlers"
2026-02-19 05:36:37.500416 | orchestrator | }
2026-02-19 05:36:37.500425 | orchestrator | changed: [testbed-node-2] => {
2026-02-19 05:36:37.500433 | orchestrator |  "msg": "Notifying handlers"
2026-02-19 05:36:37.500442 | orchestrator | }
2026-02-19 05:36:37.500450 | orchestrator | changed: [testbed-node-3] => {
2026-02-19 05:36:37.500459 | orchestrator |  "msg": "Notifying handlers"
2026-02-19 05:36:37.500468 | orchestrator | }
2026-02-19 05:36:37.500476 | orchestrator | changed: [testbed-node-4] => {
2026-02-19 05:36:37.500485 | orchestrator |  "msg": "Notifying handlers"
2026-02-19 05:36:37.500493 | orchestrator | }
2026-02-19 05:36:37.500502 | orchestrator | changed: [testbed-node-5] => {
2026-02-19 05:36:37.500511 | orchestrator |  "msg": "Notifying handlers"
2026-02-19 05:36:37.500520 | orchestrator | }
2026-02-19 05:36:37.500529 | orchestrator |
2026-02-19 05:36:37.500537 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-19 05:36:37.500546 | orchestrator | Thursday 19 February 2026 05:36:37 +0000 (0:00:02.028) 0:00:33.177 *****
2026-02-19 05:36:37.500555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-19 05:36:37.500570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-19 05:36:37.500579 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:36:37.500589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-19 05:36:37.500598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-19 05:36:37.500621 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:37:08.103864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-19 05:37:08.104001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-19 05:37:08.104018 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:37:08.104036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-19 05:37:08.104072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  
2026-02-19 05:37:08.104092 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:37:08.104111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-19 05:37:08.104165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-19 05:37:08.104176 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:37:08.104186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-19 05:37:08.104197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-19 05:37:08.104206 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:37:08.104216 | orchestrator | 2026-02-19 05:37:08.104228 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-19 05:37:08.104239 | orchestrator | Thursday 19 February 2026 05:36:39 +0000 (0:00:02.571) 0:00:35.748 ***** 2026-02-19 05:37:08.104249 | orchestrator | 2026-02-19 05:37:08.104258 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-19 05:37:08.104268 | orchestrator | Thursday 19 February 2026 05:36:40 +0000 (0:00:00.508) 0:00:36.257 ***** 2026-02-19 05:37:08.104277 | orchestrator | 2026-02-19 05:37:08.104287 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-02-19 05:37:08.104302 | orchestrator | Thursday 19 February 2026 05:36:40 +0000 (0:00:00.522) 0:00:36.779 ***** 2026-02-19 05:37:08.104311 | orchestrator | 2026-02-19 05:37:08.104322 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-19 05:37:08.104332 | orchestrator | Thursday 19 February 2026 05:36:41 +0000 (0:00:00.504) 0:00:37.284 ***** 2026-02-19 05:37:08.104343 | orchestrator | 2026-02-19 05:37:08.104354 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-19 05:37:08.104365 | orchestrator | Thursday 19 February 2026 05:36:41 +0000 (0:00:00.683) 0:00:37.967 ***** 2026-02-19 05:37:08.104376 | orchestrator | 2026-02-19 05:37:08.104387 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-19 05:37:08.104406 | orchestrator | Thursday 19 February 2026 05:36:42 +0000 (0:00:00.504) 0:00:38.471 ***** 2026-02-19 05:37:08.104417 | orchestrator | 2026-02-19 05:37:08.104432 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-19 05:37:08.104451 | orchestrator | Thursday 19 February 2026 05:36:43 +0000 (0:00:00.894) 0:00:39.366 ***** 2026-02-19 05:37:08.104469 | orchestrator | changed: [testbed-node-3] 2026-02-19 05:37:08.104488 | orchestrator | changed: [testbed-node-5] 2026-02-19 05:37:08.104505 | orchestrator | changed: [testbed-node-4] 2026-02-19 05:37:08.104522 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:37:08.104538 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:37:08.104553 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:37:08.104569 | orchestrator | 2026-02-19 05:37:08.104586 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-19 05:37:08.104605 | orchestrator | Thursday 19 February 2026 05:36:54 +0000 (0:00:11.682) 
0:00:51.048 ***** 2026-02-19 05:37:08.104623 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:37:08.104642 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:37:08.104659 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:37:08.104677 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:37:08.104688 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:37:08.104699 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:37:08.104708 | orchestrator | 2026-02-19 05:37:08.104718 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-19 05:37:08.104728 | orchestrator | Thursday 19 February 2026 05:36:57 +0000 (0:00:02.308) 0:00:53.357 ***** 2026-02-19 05:37:08.104800 | orchestrator | changed: [testbed-node-3] 2026-02-19 05:37:08.104819 | orchestrator | changed: [testbed-node-4] 2026-02-19 05:37:08.104861 | orchestrator | changed: [testbed-node-5] 2026-02-19 05:37:08.104876 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:37:08.104893 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:37:08.104908 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:37:08.104923 | orchestrator | 2026-02-19 05:37:08.104940 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-19 05:37:08.104968 | orchestrator | Thursday 19 February 2026 05:37:08 +0000 (0:00:10.862) 0:01:04.219 ***** 2026-02-19 05:37:23.942932 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-19 05:37:23.943075 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-19 05:37:23.943091 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-19 05:37:23.943103 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-19 
05:37:23.943115 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-19 05:37:23.943126 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-19 05:37:23.943137 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-19 05:37:23.943148 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-19 05:37:23.943158 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-19 05:37:23.943169 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-19 05:37:23.943180 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-19 05:37:23.943191 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-19 05:37:23.943202 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-19 05:37:23.943238 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-19 05:37:23.943251 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-19 05:37:23.943262 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-19 05:37:23.943272 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-19 05:37:23.943283 | orchestrator | ok: [testbed-node-0] 
=> (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-19 05:37:23.943295 | orchestrator | 2026-02-19 05:37:23.943307 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-19 05:37:23.943320 | orchestrator | Thursday 19 February 2026 05:37:15 +0000 (0:00:07.893) 0:01:12.113 ***** 2026-02-19 05:37:23.943346 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-19 05:37:23.943359 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:37:23.943371 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-19 05:37:23.943382 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:37:23.943393 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-19 05:37:23.943404 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:37:23.943415 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-02-19 05:37:23.943425 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-02-19 05:37:23.943436 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-02-19 05:37:23.943449 | orchestrator | 2026-02-19 05:37:23.943461 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-19 05:37:23.943474 | orchestrator | Thursday 19 February 2026 05:37:19 +0000 (0:00:03.363) 0:01:15.477 ***** 2026-02-19 05:37:23.943488 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-19 05:37:23.943501 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:37:23.943514 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-19 05:37:23.943526 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:37:23.943538 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-19 05:37:23.943548 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:37:23.943559 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 
2026-02-19 05:37:23.943570 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-19 05:37:23.943581 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-19 05:37:23.943591 | orchestrator | 2026-02-19 05:37:23.943602 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:37:23.943615 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-19 05:37:23.943627 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-19 05:37:23.943639 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-19 05:37:23.943653 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-19 05:37:23.943700 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-19 05:37:23.943728 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-19 05:37:23.943772 | orchestrator | 2026-02-19 05:37:23.943804 | orchestrator | 2026-02-19 05:37:23.943823 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 05:37:23.943840 | orchestrator | Thursday 19 February 2026 05:37:23 +0000 (0:00:04.209) 0:01:19.687 ***** 2026-02-19 05:37:23.943858 | orchestrator | =============================================================================== 2026-02-19 05:37:23.943876 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.68s 2026-02-19 05:37:23.943894 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.86s 2026-02-19 05:37:23.943914 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.89s 2026-02-19 
05:37:23.943932 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.21s 2026-02-19 05:37:23.943951 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.62s 2026-02-19 05:37:23.943969 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.38s 2026-02-19 05:37:23.943987 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.36s 2026-02-19 05:37:23.944006 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.31s 2026-02-19 05:37:23.944025 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.14s 2026-02-19 05:37:23.944043 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.94s 2026-02-19 05:37:23.944063 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.60s 2026-02-19 05:37:23.944082 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.57s 2026-02-19 05:37:23.944101 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.57s 2026-02-19 05:37:23.944113 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.54s 2026-02-19 05:37:23.944124 | orchestrator | module-load : Load modules ---------------------------------------------- 2.43s 2026-02-19 05:37:23.944135 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.35s 2026-02-19 05:37:23.944145 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.31s 2026-02-19 05:37:23.944156 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.16s 2026-02-19 05:37:23.944167 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.04s 2026-02-19 05:37:23.944177 
| orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.03s 2026-02-19 05:37:24.210504 | orchestrator | + osism apply -a upgrade ovn 2026-02-19 05:37:26.249119 | orchestrator | 2026-02-19 05:37:26 | INFO  | Task 5d6a165e-4d44-4011-bbc9-6e7db1c2727a (ovn) was prepared for execution. 2026-02-19 05:37:26.249227 | orchestrator | 2026-02-19 05:37:26 | INFO  | It takes a moment until task 5d6a165e-4d44-4011-bbc9-6e7db1c2727a (ovn) has been started and output is visible here. 2026-02-19 05:37:46.207393 | orchestrator | 2026-02-19 05:37:46.207514 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-19 05:37:46.207531 | orchestrator | 2026-02-19 05:37:46.207543 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-19 05:37:46.207555 | orchestrator | Thursday 19 February 2026 05:37:31 +0000 (0:00:01.448) 0:00:01.448 ***** 2026-02-19 05:37:46.207566 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:37:46.207578 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:37:46.207589 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:37:46.207600 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:37:46.207610 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:37:46.207621 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:37:46.207632 | orchestrator | 2026-02-19 05:37:46.207643 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-19 05:37:46.207654 | orchestrator | Thursday 19 February 2026 05:37:34 +0000 (0:00:02.794) 0:00:04.243 ***** 2026-02-19 05:37:46.207665 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-19 05:37:46.207676 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-19 05:37:46.207710 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-19 05:37:46.207729 | orchestrator | ok: [testbed-node-3] => 
(item=enable_ovn_True) 2026-02-19 05:37:46.207778 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-19 05:37:46.207797 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-19 05:37:46.207816 | orchestrator | 2026-02-19 05:37:46.207859 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-19 05:37:46.207871 | orchestrator | 2026-02-19 05:37:46.207883 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-19 05:37:46.207893 | orchestrator | Thursday 19 February 2026 05:37:36 +0000 (0:00:02.294) 0:00:06.538 ***** 2026-02-19 05:37:46.207905 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 05:37:46.207919 | orchestrator | 2026-02-19 05:37:46.207932 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-19 05:37:46.207944 | orchestrator | Thursday 19 February 2026 05:37:39 +0000 (0:00:02.646) 0:00:09.184 ***** 2026-02-19 05:37:46.207958 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.207974 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.207987 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.207999 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.208012 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.208060 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.208073 | orchestrator | 2026-02-19 05:37:46.208095 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-19 05:37:46.208106 | orchestrator | Thursday 19 February 2026 05:37:41 +0000 (0:00:02.347) 0:00:11.532 ***** 2026-02-19 05:37:46.208117 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.208128 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.208139 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.208151 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.208162 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.208172 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.208183 | orchestrator | 2026-02-19 05:37:46.208194 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-19 05:37:46.208205 | orchestrator | Thursday 19 February 2026 05:37:44 +0000 (0:00:02.449) 0:00:13.981 ***** 2026-02-19 05:37:46.208216 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-19 05:37:46.208231 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:46.208256 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529349 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529474 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529486 | orchestrator | ok: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529496 | orchestrator | 2026-02-19 05:37:53.529505 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-19 05:37:53.529514 | orchestrator | Thursday 19 February 2026 05:37:46 +0000 (0:00:01.913) 0:00:15.895 ***** 2026-02-19 05:37:53.529522 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529530 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529537 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529544 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529596 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529622 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529630 | orchestrator | 2026-02-19 05:37:53.529637 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-02-19 05:37:53.529645 | orchestrator | Thursday 19 February 2026 05:37:49 +0000 (0:00:02.965) 0:00:18.860 ***** 2026-02-19 05:37:53.529654 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529695 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:37:53.529710 | orchestrator | 2026-02-19 05:37:53.529718 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-19 05:37:53.529727 | orchestrator | Thursday 19 February 2026 05:37:51 +0000 (0:00:02.520) 0:00:21.381 ***** 2026-02-19 05:37:53.529734 | orchestrator | changed: [testbed-node-0] => { 2026-02-19 05:37:53.529743 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:37:53.529775 | orchestrator | } 2026-02-19 05:37:53.529783 | orchestrator | changed: [testbed-node-1] => { 2026-02-19 05:37:53.529790 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:37:53.529797 | orchestrator | } 2026-02-19 05:37:53.529804 | orchestrator | changed: [testbed-node-2] => { 2026-02-19 05:37:53.529811 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:37:53.529818 | orchestrator | } 2026-02-19 05:37:53.529825 | orchestrator | changed: [testbed-node-3] => { 2026-02-19 05:37:53.529832 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:37:53.529840 | orchestrator | } 2026-02-19 05:37:53.529851 | orchestrator | changed: [testbed-node-4] => { 2026-02-19 
05:37:53.529860 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:37:53.529868 | orchestrator | } 2026-02-19 05:37:53.529876 | orchestrator | changed: [testbed-node-5] => { 2026-02-19 05:37:53.529884 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:37:53.529892 | orchestrator | } 2026-02-19 05:37:53.529900 | orchestrator | 2026-02-19 05:37:53.529909 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-19 05:37:53.529917 | orchestrator | Thursday 19 February 2026 05:37:53 +0000 (0:00:01.737) 0:00:23.119 ***** 2026-02-19 05:37:53.529934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:38:23.529836 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:38:23.529942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:38:23.529955 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:38:23.529962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:38:23.529969 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:38:23.529975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:38:23.529982 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:38:23.529988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:38:23.530059 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:38:23.530067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:38:23.530073 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:38:23.530080 | orchestrator | 2026-02-19 
05:38:23.530087 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-19 05:38:23.530103 | orchestrator | Thursday 19 February 2026 05:37:55 +0000 (0:00:02.259) 0:00:25.378 ***** 2026-02-19 05:38:23.530110 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:38:23.530117 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:38:23.530122 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:38:23.530128 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:38:23.530134 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:38:23.530140 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:38:23.530145 | orchestrator | 2026-02-19 05:38:23.530151 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-19 05:38:23.530157 | orchestrator | Thursday 19 February 2026 05:37:59 +0000 (0:00:03.661) 0:00:29.039 ***** 2026-02-19 05:38:23.530163 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-19 05:38:23.530169 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-19 05:38:23.530175 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-19 05:38:23.530192 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-19 05:38:23.530198 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-19 05:38:23.530204 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-19 05:38:23.530210 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-19 05:38:23.530216 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-19 05:38:23.530221 | orchestrator | ok: [testbed-node-2] => (item={'name': 
'ovn-encap-type', 'value': 'geneve'}) 2026-02-19 05:38:23.530227 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-19 05:38:23.530233 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-19 05:38:23.530251 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-19 05:38:23.530257 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-19 05:38:23.530265 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-19 05:38:23.530271 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-19 05:38:23.530277 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-19 05:38:23.530283 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-19 05:38:23.530295 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-19 05:38:23.530302 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-19 05:38:23.530308 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-19 05:38:23.530314 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-19 05:38:23.530319 | orchestrator | ok: [testbed-node-4] => (item={'name': 
'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-19 05:38:23.530325 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-19 05:38:23.530331 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-19 05:38:23.530336 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-19 05:38:23.530342 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-19 05:38:23.530349 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-19 05:38:23.530356 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-19 05:38:23.530362 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-19 05:38:23.530368 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-19 05:38:23.530375 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-19 05:38:23.530382 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-19 05:38:23.530388 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-19 05:38:23.530395 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-19 05:38:23.530401 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-19 05:38:23.530408 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-19 05:38:23.530414 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-19 05:38:23.530421 | orchestrator 
| ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-19 05:38:23.530430 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-19 05:38:23.530441 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-19 05:38:23.530453 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-19 05:38:23.530473 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-19 05:38:23.530483 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-19 05:38:23.530516 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-19 05:38:23.530527 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-19 05:38:23.530535 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-19 05:38:23.530544 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-19 05:38:23.530569 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-19 05:41:11.128298 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-19 05:41:11.128425 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 
'state': 'absent'}) 2026-02-19 05:41:11.128440 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-19 05:41:11.128450 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-19 05:41:11.128461 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-19 05:41:11.128470 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-19 05:41:11.128478 | orchestrator | 2026-02-19 05:41:11.128489 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-19 05:41:11.128498 | orchestrator | Thursday 19 February 2026 05:38:20 +0000 (0:00:21.087) 0:00:50.127 ***** 2026-02-19 05:41:11.128507 | orchestrator | 2026-02-19 05:41:11.128516 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-19 05:41:11.128524 | orchestrator | Thursday 19 February 2026 05:38:20 +0000 (0:00:00.459) 0:00:50.586 ***** 2026-02-19 05:41:11.128533 | orchestrator | 2026-02-19 05:41:11.128542 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-19 05:41:11.128551 | orchestrator | Thursday 19 February 2026 05:38:21 +0000 (0:00:00.464) 0:00:51.051 ***** 2026-02-19 05:41:11.128559 | orchestrator | 2026-02-19 05:41:11.128568 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-19 05:41:11.128576 | orchestrator | Thursday 19 February 2026 05:38:21 +0000 (0:00:00.432) 0:00:51.483 ***** 2026-02-19 05:41:11.128585 | orchestrator | 2026-02-19 05:41:11.128594 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-19 05:41:11.128602 | orchestrator | Thursday 19 February 2026 05:38:22 +0000 (0:00:00.433) 0:00:51.917 ***** 2026-02-19 05:41:11.128611 | orchestrator | 2026-02-19 05:41:11.128621 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-19 05:41:11.128629 | orchestrator | Thursday 19 February 2026 05:38:22 +0000 (0:00:00.454) 0:00:52.372 ***** 2026-02-19 05:41:11.128638 | orchestrator | 2026-02-19 05:41:11.128646 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-19 05:41:11.128655 | orchestrator | Thursday 19 February 2026 05:38:23 +0000 (0:00:00.804) 0:00:53.177 ***** 2026-02-19 05:41:11.128664 | orchestrator | 2026-02-19 05:41:11.128673 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-02-19 05:41:11.128682 | orchestrator | changed: [testbed-node-4] 2026-02-19 05:41:11.128692 | orchestrator | changed: [testbed-node-3] 2026-02-19 05:41:11.128701 | orchestrator | changed: [testbed-node-5] 2026-02-19 05:41:11.128709 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:41:11.128718 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:41:11.128726 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:41:11.128735 | orchestrator | 2026-02-19 05:41:11.128744 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-19 05:41:11.128753 | orchestrator | 2026-02-19 05:41:11.128761 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-19 05:41:11.128770 | orchestrator | Thursday 19 February 2026 05:40:35 +0000 (0:02:11.758) 0:03:04.935 ***** 2026-02-19 05:41:11.128779 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:41:11.128788 | orchestrator | 2026-02-19 05:41:11.128866 | 
orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-19 05:41:11.128901 | orchestrator | Thursday 19 February 2026 05:40:37 +0000 (0:00:01.896) 0:03:06.832 ***** 2026-02-19 05:41:11.128912 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-19 05:41:11.128922 | orchestrator | 2026-02-19 05:41:11.128932 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-19 05:41:11.128942 | orchestrator | Thursday 19 February 2026 05:40:38 +0000 (0:00:01.834) 0:03:08.666 ***** 2026-02-19 05:41:11.128952 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.128963 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:41:11.128972 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.128982 | orchestrator | 2026-02-19 05:41:11.128992 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-19 05:41:11.129002 | orchestrator | Thursday 19 February 2026 05:40:40 +0000 (0:00:01.801) 0:03:10.468 ***** 2026-02-19 05:41:11.129026 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:41:11.129036 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.129045 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.129055 | orchestrator | 2026-02-19 05:41:11.129065 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-19 05:41:11.129075 | orchestrator | Thursday 19 February 2026 05:40:42 +0000 (0:00:01.421) 0:03:11.889 ***** 2026-02-19 05:41:11.129085 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:41:11.129095 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.129110 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.129124 | orchestrator | 2026-02-19 05:41:11.129146 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-19 
05:41:11.129162 | orchestrator | Thursday 19 February 2026 05:40:43 +0000 (0:00:01.363) 0:03:13.252 ***** 2026-02-19 05:41:11.129176 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:41:11.129189 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.129203 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.129217 | orchestrator | 2026-02-19 05:41:11.129231 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-19 05:41:11.129245 | orchestrator | Thursday 19 February 2026 05:40:45 +0000 (0:00:01.519) 0:03:14.771 ***** 2026-02-19 05:41:11.129259 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:41:11.129293 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.129308 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.129321 | orchestrator | 2026-02-19 05:41:11.129334 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-19 05:41:11.129348 | orchestrator | Thursday 19 February 2026 05:40:46 +0000 (0:00:01.363) 0:03:16.135 ***** 2026-02-19 05:41:11.129363 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:41:11.129377 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:41:11.129392 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:41:11.129406 | orchestrator | 2026-02-19 05:41:11.129421 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-19 05:41:11.129435 | orchestrator | Thursday 19 February 2026 05:40:47 +0000 (0:00:01.312) 0:03:17.448 ***** 2026-02-19 05:41:11.129450 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.129464 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:41:11.129479 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.129493 | orchestrator | 2026-02-19 05:41:11.129508 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-19 05:41:11.129523 | orchestrator | Thursday 19 February 
2026 05:40:49 +0000 (0:00:01.833) 0:03:19.282 ***** 2026-02-19 05:41:11.129537 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:41:11.129551 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.129566 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.129581 | orchestrator | 2026-02-19 05:41:11.129594 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-19 05:41:11.129609 | orchestrator | Thursday 19 February 2026 05:40:51 +0000 (0:00:01.549) 0:03:20.831 ***** 2026-02-19 05:41:11.129623 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.129650 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:41:11.129665 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.129679 | orchestrator | 2026-02-19 05:41:11.129689 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-19 05:41:11.129698 | orchestrator | Thursday 19 February 2026 05:40:52 +0000 (0:00:01.804) 0:03:22.636 ***** 2026-02-19 05:41:11.129707 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:41:11.129715 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.129724 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.129732 | orchestrator | 2026-02-19 05:41:11.129741 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-19 05:41:11.129750 | orchestrator | Thursday 19 February 2026 05:40:54 +0000 (0:00:01.343) 0:03:23.980 ***** 2026-02-19 05:41:11.129758 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:41:11.129767 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:41:11.129775 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:41:11.129784 | orchestrator | 2026-02-19 05:41:11.129793 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-19 05:41:11.129825 | orchestrator | Thursday 19 February 2026 05:40:55 +0000 (0:00:01.381) 0:03:25.361 ***** 
2026-02-19 05:41:11.129834 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:41:11.129843 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:41:11.129851 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:41:11.129953 | orchestrator | 2026-02-19 05:41:11.129963 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-19 05:41:11.129972 | orchestrator | Thursday 19 February 2026 05:40:57 +0000 (0:00:01.390) 0:03:26.751 ***** 2026-02-19 05:41:11.129981 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:41:11.129989 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.129998 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.130006 | orchestrator | 2026-02-19 05:41:11.130071 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-19 05:41:11.130089 | orchestrator | Thursday 19 February 2026 05:40:58 +0000 (0:00:01.885) 0:03:28.637 ***** 2026-02-19 05:41:11.130104 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:41:11.130119 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.130145 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.130161 | orchestrator | 2026-02-19 05:41:11.130176 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-19 05:41:11.130191 | orchestrator | Thursday 19 February 2026 05:41:00 +0000 (0:00:01.530) 0:03:30.168 ***** 2026-02-19 05:41:11.130204 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:41:11.130213 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.130221 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.130230 | orchestrator | 2026-02-19 05:41:11.130238 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-19 05:41:11.130247 | orchestrator | Thursday 19 February 2026 05:41:02 +0000 (0:00:02.135) 0:03:32.303 ***** 2026-02-19 05:41:11.130255 | orchestrator | ok: 
[testbed-node-0] 2026-02-19 05:41:11.130264 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:41:11.130273 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:41:11.130281 | orchestrator | 2026-02-19 05:41:11.130290 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-19 05:41:11.130298 | orchestrator | Thursday 19 February 2026 05:41:03 +0000 (0:00:01.390) 0:03:33.694 ***** 2026-02-19 05:41:11.130307 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:41:11.130315 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:41:11.130324 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:41:11.130333 | orchestrator | 2026-02-19 05:41:11.130351 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-19 05:41:11.130360 | orchestrator | Thursday 19 February 2026 05:41:05 +0000 (0:00:01.377) 0:03:35.071 ***** 2026-02-19 05:41:11.130368 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:41:11.130377 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:41:11.130385 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:41:11.130404 | orchestrator | 2026-02-19 05:41:11.130413 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-19 05:41:11.130422 | orchestrator | Thursday 19 February 2026 05:41:07 +0000 (0:00:01.644) 0:03:36.716 ***** 2026-02-19 05:41:11.130447 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372483 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372574 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372585 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372594 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372600 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372621 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:17.372666 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:17.372680 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:17.372693 | orchestrator | 2026-02-19 05:41:17.372701 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-19 05:41:17.372709 | orchestrator | Thursday 19 February 2026 05:41:11 +0000 (0:00:04.095) 0:03:40.811 ***** 2026-02-19 05:41:17.372715 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372722 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372738 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372745 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:17.372755 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:31.740959 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:31.741072 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:31.741088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:31.741099 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:31.741109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:31.741157 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:31.741169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:31.741179 | orchestrator | 2026-02-19 05:41:31.741191 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-02-19 05:41:31.741203 | orchestrator | Thursday 19 February 2026 05:41:17 +0000 (0:00:06.242) 0:03:47.053 ***** 2026-02-19 05:41:31.741214 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-02-19 05:41:31.741224 | orchestrator | 2026-02-19 05:41:31.741234 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-02-19 05:41:31.741244 
| orchestrator | Thursday 19 February 2026 05:41:19 +0000 (0:00:01.864) 0:03:48.918 ***** 2026-02-19 05:41:31.741254 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:41:31.741265 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:41:31.741316 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:41:31.741333 | orchestrator | 2026-02-19 05:41:31.741349 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-02-19 05:41:31.741365 | orchestrator | Thursday 19 February 2026 05:41:20 +0000 (0:00:01.726) 0:03:50.645 ***** 2026-02-19 05:41:31.741381 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:41:31.741396 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:41:31.741411 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:41:31.741427 | orchestrator | 2026-02-19 05:41:31.741443 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-02-19 05:41:31.741461 | orchestrator | Thursday 19 February 2026 05:41:23 +0000 (0:00:02.661) 0:03:53.306 ***** 2026-02-19 05:41:31.741479 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:41:31.741496 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:41:31.741514 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:41:31.741527 | orchestrator | 2026-02-19 05:41:31.741538 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-02-19 05:41:31.741549 | orchestrator | Thursday 19 February 2026 05:41:26 +0000 (0:00:02.844) 0:03:56.151 ***** 2026-02-19 05:41:31.741561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:31.741584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:31.741597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:31.741614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-19 05:41:31.741627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:31.741638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:31.741659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:36.231398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:36.231501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:36.231533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:36.231540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:41:36.231559 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:36.231566 | orchestrator | 2026-02-19 05:41:36.231575 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-19 05:41:36.231583 | orchestrator | Thursday 19 February 2026 05:41:31 +0000 (0:00:05.262) 0:04:01.413 ***** 2026-02-19 05:41:36.231590 | orchestrator | changed: [testbed-node-0] => { 2026-02-19 05:41:36.231598 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:41:36.231605 | orchestrator | } 2026-02-19 05:41:36.231611 | orchestrator | changed: [testbed-node-1] => { 2026-02-19 05:41:36.231617 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:41:36.231623 | orchestrator | } 2026-02-19 05:41:36.231630 | orchestrator | changed: [testbed-node-2] => { 2026-02-19 05:41:36.231636 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:41:36.231642 | orchestrator | } 2026-02-19 05:41:36.231648 | orchestrator | 2026-02-19 05:41:36.231654 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-19 05:41:36.231661 | orchestrator | Thursday 19 February 2026 05:41:33 +0000 (0:00:01.392) 0:04:02.805 ***** 2026-02-19 05:41:36.231667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:36.231687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:36.231701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:36.231708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 
05:41:36.231715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:36.231724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:36.231731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:36.231738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:36.231744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-19 05:41:36.231762 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-19 05:43:04.914388 | orchestrator | 2026-02-19 05:43:04.914509 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-02-19 05:43:04.914527 | orchestrator | Thursday 19 February 2026 05:41:36 +0000 (0:00:03.107) 0:04:05.913 ***** 2026-02-19 05:43:04.914540 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-02-19 05:43:04.914553 | orchestrator | changed: [testbed-node-1] => (item=[1]) 
2026-02-19 05:43:04.914564 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-02-19 05:43:04.914575 | orchestrator | 2026-02-19 05:43:04.914587 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-19 05:43:04.914599 | orchestrator | Thursday 19 February 2026 05:41:38 +0000 (0:00:01.964) 0:04:07.878 ***** 2026-02-19 05:43:04.914610 | orchestrator | changed: [testbed-node-0] => { 2026-02-19 05:43:04.914644 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:43:04.914667 | orchestrator | } 2026-02-19 05:43:04.914679 | orchestrator | changed: [testbed-node-1] => { 2026-02-19 05:43:04.914690 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:43:04.914701 | orchestrator | } 2026-02-19 05:43:04.914712 | orchestrator | changed: [testbed-node-2] => { 2026-02-19 05:43:04.914723 | orchestrator |  "msg": "Notifying handlers" 2026-02-19 05:43:04.914734 | orchestrator | } 2026-02-19 05:43:04.914745 | orchestrator | 2026-02-19 05:43:04.914756 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-19 05:43:04.914768 | orchestrator | Thursday 19 February 2026 05:41:39 +0000 (0:00:01.312) 0:04:09.191 ***** 2026-02-19 05:43:04.914779 | orchestrator | 2026-02-19 05:43:04.914790 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-19 05:43:04.914801 | orchestrator | Thursday 19 February 2026 05:41:39 +0000 (0:00:00.369) 0:04:09.561 ***** 2026-02-19 05:43:04.914812 | orchestrator | 2026-02-19 05:43:04.914824 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-19 05:43:04.914882 | orchestrator | Thursday 19 February 2026 05:41:40 +0000 (0:00:00.373) 0:04:09.934 ***** 2026-02-19 05:43:04.914894 | orchestrator | 2026-02-19 05:43:04.914905 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-19 
05:43:04.914917 | orchestrator | Thursday 19 February 2026 05:41:41 +0000 (0:00:00.778) 0:04:10.712 ***** 2026-02-19 05:43:04.914928 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:43:04.914939 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:43:04.914950 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:43:04.914962 | orchestrator | 2026-02-19 05:43:04.914973 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-19 05:43:04.914999 | orchestrator | Thursday 19 February 2026 05:41:56 +0000 (0:00:15.961) 0:04:26.674 ***** 2026-02-19 05:43:04.915011 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:43:04.915022 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:43:04.915033 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:43:04.915044 | orchestrator | 2026-02-19 05:43:04.915055 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-02-19 05:43:04.915066 | orchestrator | Thursday 19 February 2026 05:42:13 +0000 (0:00:16.223) 0:04:42.898 ***** 2026-02-19 05:43:04.915077 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-02-19 05:43:04.915089 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-02-19 05:43:04.915123 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-02-19 05:43:04.915135 | orchestrator | 2026-02-19 05:43:04.915146 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-19 05:43:04.915157 | orchestrator | Thursday 19 February 2026 05:42:28 +0000 (0:00:15.330) 0:04:58.228 ***** 2026-02-19 05:43:04.915167 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:43:04.915190 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:43:04.915201 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:43:04.915212 | orchestrator | 2026-02-19 05:43:04.915223 | orchestrator | TASK [ovn-db : Wait for leader election] 
*************************************** 2026-02-19 05:43:04.915233 | orchestrator | Thursday 19 February 2026 05:42:44 +0000 (0:00:16.293) 0:05:14.522 ***** 2026-02-19 05:43:04.915244 | orchestrator | Pausing for 5 seconds 2026-02-19 05:43:04.915256 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:43:04.915267 | orchestrator | 2026-02-19 05:43:04.915278 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-19 05:43:04.915289 | orchestrator | Thursday 19 February 2026 05:42:50 +0000 (0:00:06.153) 0:05:20.676 ***** 2026-02-19 05:43:04.915299 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:43:04.915311 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:43:04.915322 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:43:04.915332 | orchestrator | 2026-02-19 05:43:04.915343 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-19 05:43:04.915354 | orchestrator | Thursday 19 February 2026 05:42:52 +0000 (0:00:01.800) 0:05:22.477 ***** 2026-02-19 05:43:04.915365 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:43:04.915376 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:43:04.915386 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:43:04.915397 | orchestrator | 2026-02-19 05:43:04.915408 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-19 05:43:04.915419 | orchestrator | Thursday 19 February 2026 05:42:54 +0000 (0:00:01.713) 0:05:24.190 ***** 2026-02-19 05:43:04.915429 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:43:04.915440 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:43:04.915451 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:43:04.915461 | orchestrator | 2026-02-19 05:43:04.915472 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-19 05:43:04.915483 | orchestrator | Thursday 19 February 2026 05:42:56 
+0000 (0:00:01.847) 0:05:26.038 ***** 2026-02-19 05:43:04.915494 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:43:04.915504 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:43:04.915515 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:43:04.915526 | orchestrator | 2026-02-19 05:43:04.915537 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-19 05:43:04.915547 | orchestrator | Thursday 19 February 2026 05:42:58 +0000 (0:00:01.807) 0:05:27.845 ***** 2026-02-19 05:43:04.915558 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:43:04.915569 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:43:04.915579 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:43:04.915590 | orchestrator | 2026-02-19 05:43:04.915601 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-19 05:43:04.915629 | orchestrator | Thursday 19 February 2026 05:42:59 +0000 (0:00:01.760) 0:05:29.606 ***** 2026-02-19 05:43:04.915641 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:43:04.915651 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:43:04.915662 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:43:04.915673 | orchestrator | 2026-02-19 05:43:04.915684 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-02-19 05:43:04.915694 | orchestrator | Thursday 19 February 2026 05:43:01 +0000 (0:00:01.781) 0:05:31.387 ***** 2026-02-19 05:43:04.915705 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-02-19 05:43:04.915716 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-02-19 05:43:04.915727 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-02-19 05:43:04.915738 | orchestrator | 2026-02-19 05:43:04.915749 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 05:43:04.915768 | orchestrator | testbed-node-0 : ok=48  changed=15  unreachable=0 
failed=0 skipped=8  rescued=0 ignored=0 2026-02-19 05:43:04.915781 | orchestrator | testbed-node-1 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 05:43:04.915792 | orchestrator | testbed-node-2 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-19 05:43:04.915803 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 05:43:04.915814 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 05:43:04.915824 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 05:43:04.915856 | orchestrator | 2026-02-19 05:43:04.915868 | orchestrator | 2026-02-19 05:43:04.915879 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 05:43:04.915889 | orchestrator | Thursday 19 February 2026 05:43:04 +0000 (0:00:02.836) 0:05:34.223 ***** 2026-02-19 05:43:04.915905 | orchestrator | =============================================================================== 2026-02-19 05:43:04.915916 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.76s 2026-02-19 05:43:04.915927 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.09s 2026-02-19 05:43:04.915938 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.29s 2026-02-19 05:43:04.915949 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.22s 2026-02-19 05:43:04.915959 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.96s 2026-02-19 05:43:04.915970 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 15.33s 2026-02-19 05:43:04.915980 | orchestrator | ovn-db : Copying over config.json files for 
services -------------------- 6.24s 2026-02-19 05:43:04.915991 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.15s 2026-02-19 05:43:04.916001 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.26s 2026-02-19 05:43:04.916012 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.10s 2026-02-19 05:43:04.916022 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.66s 2026-02-19 05:43:04.916033 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.11s 2026-02-19 05:43:04.916044 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.05s 2026-02-19 05:43:04.916054 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.97s 2026-02-19 05:43:04.916065 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.84s 2026-02-19 05:43:04.916075 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.84s 2026-02-19 05:43:04.916086 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.79s 2026-02-19 05:43:04.916097 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.66s 2026-02-19 05:43:04.916107 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.65s 2026-02-19 05:43:04.916118 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.52s 2026-02-19 05:43:05.206979 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-19 05:43:05.207050 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-19 05:43:05.207057 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-02-19 05:43:05.214894 | orchestrator | + set -e 2026-02-19 
05:43:05.214971 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-19 05:43:05.215087 | orchestrator | ++ export INTERACTIVE=false 2026-02-19 05:43:05.215099 | orchestrator | ++ INTERACTIVE=false 2026-02-19 05:43:05.215103 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-19 05:43:05.215107 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-19 05:43:05.215120 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-02-19 05:43:07.318184 | orchestrator | 2026-02-19 05:43:07 | INFO  | Task c206b082-75de-46bd-a79f-d86c8e226f8d (ceph-rolling_update) was prepared for execution. 2026-02-19 05:43:07.318315 | orchestrator | 2026-02-19 05:43:07 | INFO  | It takes a moment until task c206b082-75de-46bd-a79f-d86c8e226f8d (ceph-rolling_update) has been started and output is visible here. 2026-02-19 05:44:28.347393 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-19 05:44:28.347479 | orchestrator | 2.16.14 2026-02-19 05:44:28.347489 | orchestrator | 2026-02-19 05:44:28.347495 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-02-19 05:44:28.347500 | orchestrator | 2026-02-19 05:44:28.347505 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-02-19 05:44:28.347511 | orchestrator | Thursday 19 February 2026 05:43:15 +0000 (0:00:01.563) 0:00:01.563 ***** 2026-02-19 05:44:28.347516 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-02-19 05:44:28.347521 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-02-19 05:44:28.347526 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-02-19 05:44:28.347531 | orchestrator | skipping: [localhost] 2026-02-19 05:44:28.347536 | orchestrator | 2026-02-19 05:44:28.347541 | orchestrator | PLAY [Gather facts and check the init system] 
********************************** 2026-02-19 05:44:28.347545 | orchestrator | 2026-02-19 05:44:28.347550 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-02-19 05:44:28.347555 | orchestrator | Thursday 19 February 2026 05:43:17 +0000 (0:00:01.822) 0:00:03.386 ***** 2026-02-19 05:44:28.347559 | orchestrator | ok: [testbed-node-0] => { 2026-02-19 05:44:28.347564 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-19 05:44:28.347569 | orchestrator | } 2026-02-19 05:44:28.347574 | orchestrator | ok: [testbed-node-1] => { 2026-02-19 05:44:28.347579 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-19 05:44:28.347584 | orchestrator | } 2026-02-19 05:44:28.347588 | orchestrator | ok: [testbed-node-2] => { 2026-02-19 05:44:28.347593 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-19 05:44:28.347597 | orchestrator | } 2026-02-19 05:44:28.347602 | orchestrator | ok: [testbed-node-3] => { 2026-02-19 05:44:28.347606 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-19 05:44:28.347611 | orchestrator | } 2026-02-19 05:44:28.347615 | orchestrator | ok: [testbed-node-4] => { 2026-02-19 05:44:28.347620 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-19 05:44:28.347625 | orchestrator | } 2026-02-19 05:44:28.347629 | orchestrator | ok: [testbed-node-5] => { 2026-02-19 05:44:28.347634 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-19 05:44:28.347638 | orchestrator | } 2026-02-19 05:44:28.347643 | orchestrator | ok: [testbed-manager] => { 2026-02-19 05:44:28.347648 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-19 05:44:28.347652 | orchestrator | } 2026-02-19 05:44:28.347657 | orchestrator | 2026-02-19 05:44:28.347662 | orchestrator | TASK [Gather 
facts] ************************************************************ 2026-02-19 05:44:28.347666 | orchestrator | Thursday 19 February 2026 05:43:21 +0000 (0:00:04.189) 0:00:07.576 ***** 2026-02-19 05:44:28.347671 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:28.347676 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:44:28.347681 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:44:28.347685 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:44:28.347690 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:44:28.347711 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:44:28.347715 | orchestrator | ok: [testbed-manager] 2026-02-19 05:44:28.347720 | orchestrator | 2026-02-19 05:44:28.347725 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-02-19 05:44:28.347730 | orchestrator | Thursday 19 February 2026 05:43:27 +0000 (0:00:06.048) 0:00:13.625 ***** 2026-02-19 05:44:28.347734 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 05:44:28.347739 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 05:44:28.347743 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 05:44:28.347748 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:44:28.347753 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 05:44:28.347758 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 05:44:28.347762 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 05:44:28.347767 | orchestrator | 2026-02-19 05:44:28.347772 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-02-19 05:44:28.347776 | 
orchestrator | Thursday 19 February 2026 05:43:58 +0000 (0:00:31.103) 0:00:44.728 ***** 2026-02-19 05:44:28.347781 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:44:28.347785 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:44:28.347790 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:44:28.347795 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:44:28.347799 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:44:28.347804 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:44:28.347808 | orchestrator | ok: [testbed-manager] 2026-02-19 05:44:28.347813 | orchestrator | 2026-02-19 05:44:28.347817 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 05:44:28.347822 | orchestrator | Thursday 19 February 2026 05:44:00 +0000 (0:00:02.113) 0:00:46.841 ***** 2026-02-19 05:44:28.347827 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-19 05:44:28.347833 | orchestrator | 2026-02-19 05:44:28.347837 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-19 05:44:28.347907 | orchestrator | Thursday 19 February 2026 05:44:03 +0000 (0:00:02.551) 0:00:49.392 ***** 2026-02-19 05:44:28.347921 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:44:28.347928 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:44:28.347935 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:44:28.347942 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:44:28.347950 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:44:28.347957 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:44:28.347965 | orchestrator | ok: [testbed-manager] 2026-02-19 05:44:28.347973 | orchestrator | 2026-02-19 05:44:28.347996 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-19 05:44:28.348003 | orchestrator | 
Thursday 19 February 2026 05:44:05 +0000 (0:00:02.379) 0:00:51.772 ***** 2026-02-19 05:44:28.348008 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:44:28.348014 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:44:28.348019 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:44:28.348024 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:44:28.348029 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:44:28.348035 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:44:28.348040 | orchestrator | ok: [testbed-manager] 2026-02-19 05:44:28.348045 | orchestrator | 2026-02-19 05:44:28.348050 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 05:44:28.348055 | orchestrator | Thursday 19 February 2026 05:44:07 +0000 (0:00:01.957) 0:00:53.729 ***** 2026-02-19 05:44:28.348061 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:44:28.348066 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:44:28.348077 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:44:28.348082 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:44:28.348087 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:44:28.348092 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:44:28.348097 | orchestrator | ok: [testbed-manager] 2026-02-19 05:44:28.348103 | orchestrator | 2026-02-19 05:44:28.348108 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 05:44:28.348113 | orchestrator | Thursday 19 February 2026 05:44:10 +0000 (0:00:02.645) 0:00:56.374 ***** 2026-02-19 05:44:28.348118 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:44:28.348123 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:44:28.348128 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:44:28.348133 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:44:28.348138 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:44:28.348144 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:44:28.348149 | orchestrator | ok: 
[testbed-manager] 2026-02-19 05:44:28.348153 | orchestrator | 2026-02-19 05:44:28.348159 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-19 05:44:28.348164 | orchestrator | Thursday 19 February 2026 05:44:12 +0000 (0:00:02.107) 0:00:58.482 ***** 2026-02-19 05:44:28.348169 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:44:28.348174 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:44:28.348179 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:44:28.348184 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:44:28.348189 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:44:28.348194 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:44:28.348200 | orchestrator | ok: [testbed-manager] 2026-02-19 05:44:28.348205 | orchestrator | 2026-02-19 05:44:28.348210 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-19 05:44:28.348215 | orchestrator | Thursday 19 February 2026 05:44:14 +0000 (0:00:02.147) 0:01:00.629 ***** 2026-02-19 05:44:28.348249 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:44:28.348255 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:44:28.348263 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:44:28.348268 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:44:28.348273 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:44:28.348279 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:44:28.348284 | orchestrator | ok: [testbed-manager] 2026-02-19 05:44:28.348289 | orchestrator | 2026-02-19 05:44:28.348295 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-19 05:44:28.348300 | orchestrator | Thursday 19 February 2026 05:44:16 +0000 (0:00:01.872) 0:01:02.501 ***** 2026-02-19 05:44:28.348306 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:28.348311 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:44:28.348316 | orchestrator | skipping: 
[testbed-node-2] 2026-02-19 05:44:28.348320 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:44:28.348325 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:44:28.348330 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:44:28.348334 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:44:28.348339 | orchestrator | 2026-02-19 05:44:28.348343 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-19 05:44:28.348348 | orchestrator | Thursday 19 February 2026 05:44:18 +0000 (0:00:02.143) 0:01:04.645 ***** 2026-02-19 05:44:28.348353 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:44:28.348357 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:44:28.348362 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:44:28.348366 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:44:28.348371 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:44:28.348375 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:44:28.348380 | orchestrator | ok: [testbed-manager] 2026-02-19 05:44:28.348384 | orchestrator | 2026-02-19 05:44:28.348389 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-19 05:44:28.348393 | orchestrator | Thursday 19 February 2026 05:44:20 +0000 (0:00:02.037) 0:01:06.682 ***** 2026-02-19 05:44:28.348398 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:44:28.348403 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 05:44:28.348411 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 05:44:28.348416 | orchestrator | 2026-02-19 05:44:28.348420 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-19 05:44:28.348425 | orchestrator | Thursday 19 February 2026 05:44:22 +0000 (0:00:01.609) 0:01:08.292 ***** 2026-02-19 05:44:28.348429 | orchestrator | ok: 
[testbed-node-0] 2026-02-19 05:44:28.348434 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:44:28.348438 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:44:28.348443 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:44:28.348447 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:44:28.348452 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:44:28.348456 | orchestrator | ok: [testbed-manager] 2026-02-19 05:44:28.348461 | orchestrator | 2026-02-19 05:44:28.348466 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-19 05:44:28.348470 | orchestrator | Thursday 19 February 2026 05:44:24 +0000 (0:00:02.052) 0:01:10.344 ***** 2026-02-19 05:44:28.348475 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:44:28.348479 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 05:44:28.348484 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 05:44:28.348489 | orchestrator | 2026-02-19 05:44:28.348493 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-19 05:44:28.348498 | orchestrator | Thursday 19 February 2026 05:44:27 +0000 (0:00:02.929) 0:01:13.273 ***** 2026-02-19 05:44:28.348506 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-19 05:44:49.877466 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-19 05:44:49.877572 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-19 05:44:49.877586 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:49.877597 | orchestrator | 2026-02-19 05:44:49.877608 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-19 05:44:49.877620 | orchestrator | Thursday 19 February 2026 05:44:28 +0000 (0:00:01.287) 0:01:14.561 ***** 2026-02-19 05:44:49.877632 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-19 05:44:49.877644 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-19 05:44:49.877655 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-19 05:44:49.877665 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:49.877674 | orchestrator | 2026-02-19 05:44:49.877685 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-19 05:44:49.877694 | orchestrator | Thursday 19 February 2026 05:44:30 +0000 (0:00:01.665) 0:01:16.226 ***** 2026-02-19 05:44:49.877750 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 05:44:49.877779 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 05:44:49.877811 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 05:44:49.877822 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:49.877832 | orchestrator | 2026-02-19 05:44:49.877841 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-19 05:44:49.877886 | orchestrator | Thursday 19 February 2026 05:44:31 +0000 (0:00:01.109) 0:01:17.336 ***** 2026-02-19 05:44:49.877898 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd0a6e5ab4aac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 05:44:24.696632', 'end': '2026-02-19 05:44:24.754621', 'delta': '0:00:00.057989', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0a6e5ab4aac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-19 05:44:49.877930 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a8e499fc5d9a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 05:44:25.433030', 'end': '2026-02-19 05:44:25.477862', 'delta': '0:00:00.044832', 
'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8e499fc5d9a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-19 05:44:49.877941 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7f7671ec0784', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 05:44:25.971306', 'end': '2026-02-19 05:44:26.036208', 'delta': '0:00:00.064902', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7f7671ec0784'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-19 05:44:49.877952 | orchestrator | 2026-02-19 05:44:49.877962 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-19 05:44:49.877971 | orchestrator | Thursday 19 February 2026 05:44:32 +0000 (0:00:01.132) 0:01:18.469 ***** 2026-02-19 05:44:49.877981 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:44:49.877991 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:44:49.878001 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:44:49.878062 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:44:49.878076 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:44:49.878086 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:44:49.878097 | orchestrator | ok: 
[testbed-manager] 2026-02-19 05:44:49.878108 | orchestrator | 2026-02-19 05:44:49.878127 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-19 05:44:49.878138 | orchestrator | Thursday 19 February 2026 05:44:34 +0000 (0:00:01.945) 0:01:20.414 ***** 2026-02-19 05:44:49.878149 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:49.878160 | orchestrator | 2026-02-19 05:44:49.878171 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-19 05:44:49.878182 | orchestrator | Thursday 19 February 2026 05:44:35 +0000 (0:00:01.216) 0:01:21.631 ***** 2026-02-19 05:44:49.878191 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:44:49.878206 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:44:49.878216 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:44:49.878226 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:44:49.878235 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:44:49.878245 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:44:49.878255 | orchestrator | ok: [testbed-manager] 2026-02-19 05:44:49.878264 | orchestrator | 2026-02-19 05:44:49.878274 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-19 05:44:49.878283 | orchestrator | Thursday 19 February 2026 05:44:37 +0000 (0:00:01.989) 0:01:23.620 ***** 2026-02-19 05:44:49.878293 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:44:49.878302 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-19 05:44:49.878312 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-19 05:44:49.878322 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 05:44:49.878331 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-19 05:44:49.878341 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-19 05:44:49.878350 | orchestrator 
| ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-19 05:44:49.878360 | orchestrator | 2026-02-19 05:44:49.878369 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 05:44:49.878379 | orchestrator | Thursday 19 February 2026 05:44:41 +0000 (0:00:03.788) 0:01:27.409 ***** 2026-02-19 05:44:49.878388 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:44:49.878398 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:44:49.878407 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:44:49.878417 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:44:49.878426 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:44:49.878435 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:44:49.878445 | orchestrator | ok: [testbed-manager] 2026-02-19 05:44:49.878454 | orchestrator | 2026-02-19 05:44:49.878464 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-19 05:44:49.878474 | orchestrator | Thursday 19 February 2026 05:44:43 +0000 (0:00:02.090) 0:01:29.500 ***** 2026-02-19 05:44:49.878483 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:49.878493 | orchestrator | 2026-02-19 05:44:49.878502 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-19 05:44:49.878512 | orchestrator | Thursday 19 February 2026 05:44:44 +0000 (0:00:01.118) 0:01:30.619 ***** 2026-02-19 05:44:49.878522 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:49.878531 | orchestrator | 2026-02-19 05:44:49.878541 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 05:44:49.878551 | orchestrator | Thursday 19 February 2026 05:44:45 +0000 (0:00:01.218) 0:01:31.837 ***** 2026-02-19 05:44:49.878560 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:49.878570 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:44:49.878580 | orchestrator | skipping: 
[testbed-node-2] 2026-02-19 05:44:49.878590 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:44:49.878599 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:44:49.878608 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:44:49.878618 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:44:49.878627 | orchestrator | 2026-02-19 05:44:49.878637 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-19 05:44:49.878646 | orchestrator | Thursday 19 February 2026 05:44:47 +0000 (0:00:02.323) 0:01:34.161 ***** 2026-02-19 05:44:49.878662 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:49.878672 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:44:49.878681 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:44:49.878691 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:44:49.878700 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:44:49.878709 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:44:49.878726 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:44:59.839248 | orchestrator | 2026-02-19 05:44:59.839378 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-19 05:44:59.839409 | orchestrator | Thursday 19 February 2026 05:44:49 +0000 (0:00:01.928) 0:01:36.090 ***** 2026-02-19 05:44:59.839429 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:59.839452 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:44:59.839468 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:44:59.839479 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:44:59.839490 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:44:59.839501 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:44:59.839512 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:44:59.839523 | orchestrator | 2026-02-19 05:44:59.839535 | orchestrator | TASK [ceph-facts : Resolve 
dedicated_device link(s)] *************************** 2026-02-19 05:44:59.839546 | orchestrator | Thursday 19 February 2026 05:44:51 +0000 (0:00:01.996) 0:01:38.086 ***** 2026-02-19 05:44:59.839557 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:59.839568 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:44:59.839579 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:44:59.839589 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:44:59.839600 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:44:59.839611 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:44:59.839622 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:44:59.839632 | orchestrator | 2026-02-19 05:44:59.839644 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 05:44:59.839654 | orchestrator | Thursday 19 February 2026 05:44:53 +0000 (0:00:01.898) 0:01:39.985 ***** 2026-02-19 05:44:59.839665 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:59.839676 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:44:59.839687 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:44:59.839698 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:44:59.839708 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:44:59.839719 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:44:59.839730 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:44:59.839740 | orchestrator | 2026-02-19 05:44:59.839751 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 05:44:59.839762 | orchestrator | Thursday 19 February 2026 05:44:55 +0000 (0:00:02.013) 0:01:41.999 ***** 2026-02-19 05:44:59.839774 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:59.839787 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:44:59.839799 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:44:59.839810 | orchestrator | 
skipping: [testbed-node-3] 2026-02-19 05:44:59.839823 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:44:59.839883 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:44:59.839898 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:44:59.839911 | orchestrator | 2026-02-19 05:44:59.839923 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 05:44:59.839941 | orchestrator | Thursday 19 February 2026 05:44:57 +0000 (0:00:01.846) 0:01:43.846 ***** 2026-02-19 05:44:59.839959 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:44:59.839988 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:44:59.840006 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:44:59.840025 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:44:59.840042 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:44:59.840060 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:44:59.840077 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:44:59.840128 | orchestrator | 2026-02-19 05:44:59.840148 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 05:44:59.840167 | orchestrator | Thursday 19 February 2026 05:44:59 +0000 (0:00:02.077) 0:01:45.924 ***** 2026-02-19 05:44:59.840189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:44:59.840214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:44:59.840232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:44:59.840280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 05:44:59.840303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:44:59.840322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:44:59.840341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:44:59.840382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d17f80a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15', 
'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 05:44:59.840426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:44:59.840459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.174309 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:45:00.174401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.174417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.174426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.174452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 05:45:00.174483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.174493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.174503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.174532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5b78108', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 05:45:00.174547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.174564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.174574 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:45:00.174583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.174593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.174602 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.174612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 05:45:00.174628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.501694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.501818 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.502009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a13c58d9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 05:45:00.502123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.502138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.502183 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:45:00.502224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-02-19 05:45:00.502239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542', 'dm-uuid-LVM-lX34uhB8tmDTkL93DczNXv6QbAw0ysjKmdjNAgdMohU9ZcAXcHNfClcWYQxdmajV'], 'uuids': ['76bd5aba-0bb7-430d-953d-ee2f2591c83e'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV']}})  2026-02-19 05:45:00.502271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417', 'scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '50533a39', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 05:45:00.502284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-he7JRo-1c5L-pX5O-Be3A-VFvn-vFA2-R1K8r6', 'scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e', 'scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159']}})  2026-02-19 05:45:00.502297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.502308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.502320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-25-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 05:45:00.502384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.510161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl', 'dm-uuid-CRYPT-LUKS2-96c3bdbb8dfb4f8d89601607ffc96021-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 05:45:00.510246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.510273 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159', 'dm-uuid-LVM-woOiLPc2MZX9tMqNu9mJ52M00GUnNLJGpmysyPKim6lEMTRsO9IDguylIzFZfnRl'], 'uuids': ['96c3bdbb-8dfb-4f8d-8960-1607ffc96021'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl']}})  2026-02-19 05:45:00.510287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qeKANd-btTr-kyqx-ZYbg-qz1F-HqnA-ll4bBH', 'scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8', 'scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542']}})  2026-02-19 05:45:00.510300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.510331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23a82e55', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 05:45:00.510365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.510378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.510389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV', 'dm-uuid-CRYPT-LUKS2-76bd5aba0bb7430d953dee2f2591c83e-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 05:45:00.510401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.510413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160', 'dm-uuid-LVM-rZldl4LmlLXg6d7bs7fyJX4wA6bTnXoE36sCfZeCCq67ndja1fQrkP9qxd3UF2mf'], 'uuids': ['a59715b7-019c-4dda-9336-d3b7804a06c1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf']}})  2026-02-19 05:45:00.510432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58', 'scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85ad02dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 05:45:00.689205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hPAd08-UuBL-3Ygg-jY8a-jEiG-hu1p-INZmAJ', 'scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262', 'scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02']}})  2026-02-19 05:45:00.689332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.689355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.689368 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-20-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 05:45:00.689381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.689393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww', 'dm-uuid-CRYPT-LUKS2-f68538a13fa347dc9b85a13ec62262c1-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 05:45:00.689406 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:45:00.689419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.689472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02', 'dm-uuid-LVM-av3z15qCzrck2TCuh26quy9SxGc4Uj0HHGk96w6thbK5NXQZcgefX0YYJ6eJW1Ww'], 'uuids': ['f68538a1-3fa3-47dc-9b85-a13ec62262c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww']}})  2026-02-19 05:45:00.689492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6C6XL0-fLb8-YfTA-cysM-yAaf-4LBE-w1N2gW', 'scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0', 'scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160']}})  2026-02-19 05:45:00.689505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.689522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '28e9d7a7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 05:45:00.689552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.846496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.846610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf', 'dm-uuid-CRYPT-LUKS2-a59715b7019c4dda9336d3b7804a06c1-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 05:45:00.846658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.846672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456', 'dm-uuid-LVM-gHzkzoT6x1EhckfA8WsFQCGWNshTerqrXG1Ajk5mh4ejOwZYq1z2HQZKbcxUaUg2'], 'uuids': ['ca7295e3-b0e7-43de-a68b-3daf29557592'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2']}})  2026-02-19 05:45:00.846685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b', 'scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '74afed04', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 05:45:00.846698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6O260y-bve9-uiSU-QHAy-uS14-SBn4-tvFUE4', 'scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42', 'scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210']}})  2026-02-19 05:45:00.846733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.846763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.846775 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-22-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 05:45:00.846793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.846805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM', 'dm-uuid-CRYPT-LUKS2-0386b2e9039d452a9d925bb7d9e8a516-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 05:45:00.846817 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:45:00.846830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:00.846842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210', 'dm-uuid-LVM-UIbdS0VVHImCuypuIpNFpiSdvep5TRFy7pgtKei4H9zcQ1O9SOgtegap7Wmtw1fM'], 'uuids': ['0386b2e9-039d-452a-9d92-5bb7d9e8a516'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM']}})  2026-02-19 05:45:00.846909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-82yKcB-Ey0W-COBu-ydNY-Ko6v-AgZ3-OegvdJ', 'scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46', 'scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456']}})  2026-02-19 05:45:00.846953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:02.022900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b283ac38', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 05:45:02.022997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:02.023012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:02.023047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2', 'dm-uuid-CRYPT-LUKS2-ca7295e3b0e743dea68b3daf29557592-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 05:45:02.023057 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:02.023066 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:45:02.023093 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:02.023108 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:02.023114 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 05:45:02.023119 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:02.023125 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:02.023130 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:45:02.023146 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2', 'scsi-SQEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2'], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b7aa0e34', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part16', 'scsi-SQEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part14', 'scsi-SQEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part15', 'scsi-SQEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part1', 'scsi-SQEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})
2026-02-19 05:45:02.156037 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-19 05:45:02.156154 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-19 05:45:02.156169 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:45:02.156181 | orchestrator |
2026-02-19 05:45:02.156191 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-19 05:45:02.156200 | orchestrator | Thursday 19 February 2026 05:45:02 +0000 (0:00:02.308) 0:01:48.233 *****
2026-02-19 05:45:02.156211 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.156242 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.156251 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.156261 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.156298 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.156308 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.156317 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.156335 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d17f80a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.156357 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.318108 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.318215 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:45:02.318234 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.318273 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.318286 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.318299 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.318325 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.318369 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.318390 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.318425 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5b78108', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.318457 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.318493 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.580178 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:45:02.580313 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.580377 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.580398 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.580417 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.580477 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.580517 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.580566 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.580610 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a13c58d9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.580642 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.580662 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.580693 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:45:02.580729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.706955 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542', 'dm-uuid-LVM-lX34uhB8tmDTkL93DczNXv6QbAw0ysjKmdjNAgdMohU9ZcAXcHNfClcWYQxdmajV'], 'uuids': ['76bd5aba-0bb7-430d-953d-ee2f2591c83e'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV']}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.707062 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417', 'scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '50533a39', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.707079 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-he7JRo-1c5L-pX5O-Be3A-VFvn-vFA2-R1K8r6', 'scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e', 'scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159']}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.707114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.707157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.707183 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.707191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.707199 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl', 'dm-uuid-CRYPT-LUKS2-96c3bdbb8dfb4f8d89601607ffc96021-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.707207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.707218 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159', 'dm-uuid-LVM-woOiLPc2MZX9tMqNu9mJ52M00GUnNLJGpmysyPKim6lEMTRsO9IDguylIzFZfnRl'], 'uuids': ['96c3bdbb-8dfb-4f8d-8960-1607ffc96021'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl']}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.707238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qeKANd-btTr-kyqx-ZYbg-qz1F-HqnA-ll4bBH', 'scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8', 'scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542']}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.854296 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 05:45:02.854429 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23a82e55', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.854475 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.854510 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.854525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV', 'dm-uuid-CRYPT-LUKS2-76bd5aba0bb7430d953dee2f2591c83e-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.854541 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.854557 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160', 'dm-uuid-LVM-rZldl4LmlLXg6d7bs7fyJX4wA6bTnXoE36sCfZeCCq67ndja1fQrkP9qxd3UF2mf'], 'uuids': ['a59715b7-019c-4dda-9336-d3b7804a06c1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf']}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.854578 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58', 'scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85ad02dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.854611 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hPAd08-UuBL-3Ygg-jY8a-jEiG-hu1p-INZmAJ', 'scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262', 'scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02']}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.955389 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.955461 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.955469 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456', 'dm-uuid-LVM-gHzkzoT6x1EhckfA8WsFQCGWNshTerqrXG1Ajk5mh4ejOwZYq1z2HQZKbcxUaUg2'], 'uuids': ['ca7295e3-b0e7-43de-a68b-3daf29557592'], 'labels': [], 
'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2']}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.955488 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b', 'scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '74afed04', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.955511 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.955528 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6O260y-bve9-uiSU-QHAy-uS14-SBn4-tvFUE4', 'scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42', 'scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210']}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.955535 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-20-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.955540 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.955548 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.955557 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:45:02.955563 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.955568 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww', 'dm-uuid-CRYPT-LUKS2-f68538a13fa347dc9b85a13ec62262c1-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.955577 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.999722 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02', 'dm-uuid-LVM-av3z15qCzrck2TCuh26quy9SxGc4Uj0HHGk96w6thbK5NXQZcgefX0YYJ6eJW1Ww'], 'uuids': ['f68538a1-3fa3-47dc-9b85-a13ec62262c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww']}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.999812 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-22-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 
KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.999905 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.999917 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6C6XL0-fLb8-YfTA-cysM-yAaf-4LBE-w1N2gW', 'scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0', 'scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160']}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.999927 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM', 'dm-uuid-CRYPT-LUKS2-0386b2e9039d452a9d925bb7d9e8a516-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.999951 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.999959 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.999966 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.999983 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210', 'dm-uuid-LVM-UIbdS0VVHImCuypuIpNFpiSdvep5TRFy7pgtKei4H9zcQ1O9SOgtegap7Wmtw1fM'], 'uuids': ['0386b2e9-039d-452a-9d92-5bb7d9e8a516'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM']}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:02.999990 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:03.000002 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-82yKcB-Ey0W-COBu-ydNY-Ko6v-AgZ3-OegvdJ', 'scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46', 'scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456']}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:03.091427 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:03.091537 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:03.091599 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:03.091614 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:03.091652 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b283ac38', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:03.091668 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:03.091698 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '28e9d7a7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 
'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:03.091718 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:06.734514 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:06.734661 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
 2026-02-19 05:45:06.734690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:06.734707 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2', 'dm-uuid-CRYPT-LUKS2-ca7295e3b0e743dea68b3daf29557592-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:06.734723 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:06.734767 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2', 'scsi-SQEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b7aa0e34', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part16', 'scsi-SQEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part14', 'scsi-SQEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part15', 'scsi-SQEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part1', 'scsi-SQEMU_QEMU_HARDDISK_b7aa0e34-9a3e-479c-b466-47f6ccb691a2-part1'], 'uuids': 
['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:06.734799 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:45:06.734826 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf', 'dm-uuid-CRYPT-LUKS2-a59715b7019c4dda9336d3b7804a06c1-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:06.734844 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:06.734952 | 
orchestrator | skipping: [testbed-node-4] 2026-02-19 05:45:06.734969 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:45:06.734985 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:45:06.735000 | orchestrator | 2026-02-19 05:45:06.735018 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 05:45:06.735035 | orchestrator | Thursday 19 February 2026 05:45:04 +0000 (0:00:02.202) 0:01:50.436 ***** 2026-02-19 05:45:06.735050 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:45:06.735066 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:45:06.735082 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:45:06.735097 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:45:06.735110 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:45:06.735120 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:45:06.735130 | orchestrator | ok: [testbed-manager] 2026-02-19 05:45:06.735140 | orchestrator | 2026-02-19 05:45:06.735151 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 05:45:06.735179 | orchestrator | Thursday 19 February 2026 05:45:06 +0000 (0:00:02.507) 0:01:52.943 ***** 2026-02-19 05:45:37.233199 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:45:37.233342 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:45:37.233354 | orchestrator | ok: [testbed-node-2] 
2026-02-19 05:45:37.233361 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:45:37.233367 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:45:37.233374 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:45:37.233380 | orchestrator | ok: [testbed-manager] 2026-02-19 05:45:37.233387 | orchestrator | 2026-02-19 05:45:37.233395 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 05:45:37.233402 | orchestrator | Thursday 19 February 2026 05:45:08 +0000 (0:00:01.835) 0:01:54.779 ***** 2026-02-19 05:45:37.233409 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:45:37.233415 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:45:37.233422 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:45:37.233428 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:45:37.233434 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:45:37.233442 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:45:37.233448 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:45:37.233454 | orchestrator | 2026-02-19 05:45:37.233461 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 05:45:37.233467 | orchestrator | Thursday 19 February 2026 05:45:10 +0000 (0:00:02.387) 0:01:57.166 ***** 2026-02-19 05:45:37.233474 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:45:37.233480 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:45:37.233487 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:45:37.233493 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:45:37.233499 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:45:37.233505 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:45:37.233511 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:45:37.233517 | orchestrator | 2026-02-19 05:45:37.233524 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 05:45:37.233530 | 
orchestrator | Thursday 19 February 2026 05:45:13 +0000 (0:00:02.266) 0:01:59.432 ***** 2026-02-19 05:45:37.233536 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:45:37.233543 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:45:37.233549 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:45:37.233555 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:45:37.233561 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:45:37.233567 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:45:37.233573 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-02-19 05:45:37.233580 | orchestrator | 2026-02-19 05:45:37.233586 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 05:45:37.233593 | orchestrator | Thursday 19 February 2026 05:45:16 +0000 (0:00:02.995) 0:02:02.428 ***** 2026-02-19 05:45:37.233599 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:45:37.233605 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:45:37.233611 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:45:37.233618 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:45:37.233624 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:45:37.233630 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:45:37.233636 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:45:37.233642 | orchestrator | 2026-02-19 05:45:37.233683 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 05:45:37.233691 | orchestrator | Thursday 19 February 2026 05:45:18 +0000 (0:00:01.823) 0:02:04.251 ***** 2026-02-19 05:45:37.233697 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:45:37.233704 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-19 05:45:37.233710 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-19 05:45:37.233716 | orchestrator | 
ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-19 05:45:37.233723 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 05:45:37.233745 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-19 05:45:37.233752 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-19 05:45:37.233759 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-19 05:45:37.233766 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-19 05:45:37.233774 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-19 05:45:37.233781 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-19 05:45:37.233788 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-19 05:45:37.233795 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-19 05:45:37.233802 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-19 05:45:37.233809 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-19 05:45:37.233816 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-19 05:45:37.233823 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-19 05:45:37.233830 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-19 05:45:37.233837 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-19 05:45:37.233843 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-19 05:45:37.233870 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-19 05:45:37.233878 | orchestrator | 2026-02-19 05:45:37.233886 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 05:45:37.233893 | orchestrator | Thursday 19 February 2026 05:45:21 +0000 (0:00:03.077) 0:02:07.328 ***** 2026-02-19 05:45:37.233900 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-19 05:45:37.233907 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-19 05:45:37.233914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-19 05:45:37.233921 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:45:37.233929 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-19 05:45:37.233936 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-19 05:45:37.233942 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-19 05:45:37.233950 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:45:37.233957 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-19 05:45:37.233964 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-19 05:45:37.233985 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-19 05:45:37.233993 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:45:37.234000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-19 05:45:37.234007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-19 05:45:37.234014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-19 05:45:37.234073 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:45:37.234080 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-19 05:45:37.234087 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-19 05:45:37.234095 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-19 05:45:37.234102 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:45:37.234109 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-19 05:45:37.234116 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-19 05:45:37.234123 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-19 
05:45:37.234130 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:45:37.234136 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-19 05:45:37.234142 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-19 05:45:37.234148 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-19 05:45:37.234154 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:45:37.234167 | orchestrator | 2026-02-19 05:45:37.234174 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 05:45:37.234180 | orchestrator | Thursday 19 February 2026 05:45:23 +0000 (0:00:02.127) 0:02:09.456 ***** 2026-02-19 05:45:37.234187 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:45:37.234193 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:45:37.234199 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:45:37.234205 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:45:37.234212 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 05:45:37.234219 | orchestrator | 2026-02-19 05:45:37.234230 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-19 05:45:37.234238 | orchestrator | Thursday 19 February 2026 05:45:25 +0000 (0:00:01.928) 0:02:11.384 ***** 2026-02-19 05:45:37.234244 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:45:37.234250 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:45:37.234256 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:45:37.234262 | orchestrator | 2026-02-19 05:45:37.234269 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-19 05:45:37.234275 | orchestrator | Thursday 19 February 2026 05:45:26 +0000 (0:00:01.539) 0:02:12.924 ***** 
2026-02-19 05:45:37.234281 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:45:37.234287 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:45:37.234293 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:45:37.234300 | orchestrator | 2026-02-19 05:45:37.234306 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-19 05:45:37.234312 | orchestrator | Thursday 19 February 2026 05:45:28 +0000 (0:00:01.427) 0:02:14.352 ***** 2026-02-19 05:45:37.234318 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:45:37.234324 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:45:37.234331 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:45:37.234337 | orchestrator | 2026-02-19 05:45:37.234343 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-19 05:45:37.234349 | orchestrator | Thursday 19 February 2026 05:45:29 +0000 (0:00:01.349) 0:02:15.701 ***** 2026-02-19 05:45:37.234355 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:45:37.234362 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:45:37.234368 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:45:37.234374 | orchestrator | 2026-02-19 05:45:37.234380 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-19 05:45:37.234387 | orchestrator | Thursday 19 February 2026 05:45:30 +0000 (0:00:01.417) 0:02:17.119 ***** 2026-02-19 05:45:37.234393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 05:45:37.234399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 05:45:37.234405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 05:45:37.234412 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:45:37.234418 | orchestrator | 2026-02-19 05:45:37.234424 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - 
ipv4] ****** 2026-02-19 05:45:37.234431 | orchestrator | Thursday 19 February 2026 05:45:32 +0000 (0:00:01.622) 0:02:18.742 ***** 2026-02-19 05:45:37.234437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 05:45:37.234443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 05:45:37.234449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 05:45:37.234455 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:45:37.234462 | orchestrator | 2026-02-19 05:45:37.234468 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 05:45:37.234474 | orchestrator | Thursday 19 February 2026 05:45:34 +0000 (0:00:01.650) 0:02:20.393 ***** 2026-02-19 05:45:37.234480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 05:45:37.234491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 05:45:37.234497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 05:45:37.234503 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:45:37.234510 | orchestrator | 2026-02-19 05:45:37.234516 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 05:45:37.234522 | orchestrator | Thursday 19 February 2026 05:45:35 +0000 (0:00:01.627) 0:02:22.020 ***** 2026-02-19 05:45:37.234528 | orchestrator | ok: [testbed-node-3] 2026-02-19 05:45:37.234534 | orchestrator | ok: [testbed-node-4] 2026-02-19 05:45:37.234541 | orchestrator | ok: [testbed-node-5] 2026-02-19 05:45:37.234547 | orchestrator | 2026-02-19 05:45:37.234554 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-19 05:45:37.234571 | orchestrator | Thursday 19 February 2026 05:45:37 +0000 (0:00:01.411) 0:02:23.432 ***** 2026-02-19 05:46:21.476119 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-19 
05:46:21.476222 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-19 05:46:21.476233 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-19 05:46:21.476242 | orchestrator | 2026-02-19 05:46:21.476250 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-19 05:46:21.476259 | orchestrator | Thursday 19 February 2026 05:45:38 +0000 (0:00:01.523) 0:02:24.956 ***** 2026-02-19 05:46:21.476267 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:46:21.476275 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 05:46:21.476284 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 05:46:21.476291 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 05:46:21.476299 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 05:46:21.476306 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 05:46:21.476314 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 05:46:21.476321 | orchestrator | 2026-02-19 05:46:21.476329 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-19 05:46:21.476336 | orchestrator | Thursday 19 February 2026 05:45:40 +0000 (0:00:02.022) 0:02:26.978 ***** 2026-02-19 05:46:21.476343 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:46:21.476350 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 05:46:21.476357 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 05:46:21.476364 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-19 05:46:21.476385 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 05:46:21.476393 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 05:46:21.476400 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 05:46:21.476407 | orchestrator | 2026-02-19 05:46:21.476414 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-02-19 05:46:21.476422 | orchestrator | Thursday 19 February 2026 05:45:43 +0000 (0:00:02.833) 0:02:29.812 ***** 2026-02-19 05:46:21.476429 | orchestrator | changed: [testbed-node-3] 2026-02-19 05:46:21.476437 | orchestrator | changed: [testbed-manager] 2026-02-19 05:46:21.476444 | orchestrator | changed: [testbed-node-4] 2026-02-19 05:46:21.476451 | orchestrator | changed: [testbed-node-5] 2026-02-19 05:46:21.476458 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:46:21.476465 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:46:21.476472 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:46:21.476480 | orchestrator | 2026-02-19 05:46:21.476487 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] *********************** 2026-02-19 05:46:21.476513 | orchestrator | Thursday 19 February 2026 05:45:52 +0000 (0:00:08.520) 0:02:38.333 ***** 2026-02-19 05:46:21.476521 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:46:21.476528 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:46:21.476535 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:46:21.476542 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:46:21.476549 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:46:21.476556 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:46:21.476563 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:46:21.476570 | orchestrator | 
2026-02-19 05:46:21.476578 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-02-19 05:46:21.476585 | orchestrator | Thursday 19 February 2026 05:45:54 +0000 (0:00:01.975) 0:02:40.308 ***** 2026-02-19 05:46:21.476592 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:46:21.476599 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:46:21.476606 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:46:21.476613 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:46:21.476620 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:46:21.476627 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:46:21.476634 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:46:21.476641 | orchestrator | 2026-02-19 05:46:21.476650 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-02-19 05:46:21.476658 | orchestrator | Thursday 19 February 2026 05:45:55 +0000 (0:00:01.855) 0:02:42.164 ***** 2026-02-19 05:46:21.476667 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:46:21.476675 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:46:21.476683 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:46:21.476692 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:46:21.476700 | orchestrator | changed: [testbed-node-3] 2026-02-19 05:46:21.476708 | orchestrator | changed: [testbed-node-4] 2026-02-19 05:46:21.476716 | orchestrator | changed: [testbed-node-5] 2026-02-19 05:46:21.476724 | orchestrator | 2026-02-19 05:46:21.476732 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-02-19 05:46:21.476741 | orchestrator | Thursday 19 February 2026 05:45:58 +0000 (0:00:02.983) 0:02:45.148 ***** 2026-02-19 05:46:21.476749 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager
2026-02-19 05:46:21.476758 | orchestrator |
2026-02-19 05:46:21.476766 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-02-19 05:46:21.476773 | orchestrator | Thursday 19 February 2026 05:46:01 +0000 (0:00:02.809) 0:02:47.958 *****
2026-02-19 05:46:21.476780 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:21.476787 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:21.476794 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:21.476801 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:21.476808 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:21.476829 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:21.476837 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:21.476844 | orchestrator |
2026-02-19 05:46:21.476851 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-02-19 05:46:21.476879 | orchestrator | Thursday 19 February 2026 05:46:03 +0000 (0:00:01.916) 0:02:49.874 *****
2026-02-19 05:46:21.476888 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:21.476895 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:21.476902 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:21.476909 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:21.476916 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:21.476923 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:21.476930 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:21.476937 | orchestrator |
2026-02-19 05:46:21.476944 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-02-19 05:46:21.476951 | orchestrator | Thursday 19 February 2026 05:46:05 +0000 (0:00:02.022) 0:02:51.897 *****
2026-02-19 05:46:21.476965 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:21.476972 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:21.476979 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:21.476986 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:21.476993 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:21.477000 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:21.477007 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:21.477014 | orchestrator |
2026-02-19 05:46:21.477021 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-02-19 05:46:21.477028 | orchestrator | Thursday 19 February 2026 05:46:07 +0000 (0:00:01.879) 0:02:53.776 *****
2026-02-19 05:46:21.477036 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:21.477043 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:21.477050 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:21.477057 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:21.477063 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:21.477071 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:21.477078 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:21.477086 | orchestrator |
2026-02-19 05:46:21.477106 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-02-19 05:46:21.477125 | orchestrator | Thursday 19 February 2026 05:46:09 +0000 (0:00:01.874) 0:02:55.651 *****
2026-02-19 05:46:21.477137 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:21.477148 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:21.477159 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:21.477170 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:21.477179 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:21.477190 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:21.477200 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:21.477211 | orchestrator |
2026-02-19 05:46:21.477221 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-02-19 05:46:21.477232 | orchestrator | Thursday 19 February 2026 05:46:11 +0000 (0:00:01.836) 0:02:57.488 *****
2026-02-19 05:46:21.477243 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:21.477255 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:21.477267 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:21.477278 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:21.477289 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:21.477300 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:21.477312 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:21.477323 | orchestrator |
2026-02-19 05:46:21.477336 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-02-19 05:46:21.477347 | orchestrator | Thursday 19 February 2026 05:46:13 +0000 (0:00:02.012) 0:02:59.500 *****
2026-02-19 05:46:21.477359 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:21.477369 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:21.477381 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:21.477391 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:21.477402 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:21.477413 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:21.477423 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:21.477434 | orchestrator |
2026-02-19 05:46:21.477446 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-02-19 05:46:21.477457 | orchestrator | Thursday 19 February 2026 05:46:15 +0000 (0:00:02.049) 0:03:01.549 *****
2026-02-19 05:46:21.477469 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:21.477482 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:21.477493 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:21.477505 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:21.477516 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:21.477528 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:21.477549 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:21.477561 | orchestrator |
2026-02-19 05:46:21.477574 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-02-19 05:46:21.477585 | orchestrator | Thursday 19 February 2026 05:46:17 +0000 (0:00:02.419) 0:03:03.969 *****
2026-02-19 05:46:21.477597 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:21.477609 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:21.477621 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:21.477628 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:21.477635 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:21.477642 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:21.477649 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:21.477656 | orchestrator |
2026-02-19 05:46:21.477664 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-02-19 05:46:21.477671 | orchestrator | Thursday 19 February 2026 05:46:19 +0000 (0:00:02.054) 0:03:06.024 *****
2026-02-19 05:46:21.477678 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:21.477685 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:21.477692 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:21.477699 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:21.477706 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:21.477713 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:21.477720 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:21.477727 | orchestrator |
2026-02-19 05:46:21.477735 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-02-19 05:46:21.477752 | orchestrator | Thursday 19 February 2026 05:46:21 +0000 (0:00:01.656) 0:03:07.680 *****
2026-02-19 05:46:44.997460 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:44.997580 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:44.997595 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:44.997606 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:44.997617 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:44.997628 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:44.997639 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:44.997651 | orchestrator |
2026-02-19 05:46:44.997664 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-02-19 05:46:44.997676 | orchestrator | Thursday 19 February 2026 05:46:23 +0000 (0:00:01.711) 0:03:09.392 *****
2026-02-19 05:46:44.997687 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:44.997698 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:44.997709 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:44.997719 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:44.997730 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:44.997740 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:44.997751 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:44.997762 | orchestrator |
2026-02-19 05:46:44.997773 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-02-19 05:46:44.997784 | orchestrator | Thursday 19 February 2026 05:46:24 +0000 (0:00:01.644) 0:03:11.037 *****
2026-02-19 05:46:44.997795 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:44.997806 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:44.997817 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:44.997829 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 05:46:44.997842 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 05:46:44.997853 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:44.997919 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 05:46:44.997934 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 05:46:44.997969 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:44.997983 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})
2026-02-19 05:46:44.997995 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})
2026-02-19 05:46:44.998009 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:44.998088 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:44.998107 | orchestrator |
2026-02-19 05:46:44.998127 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-02-19 05:46:44.998159 | orchestrator | Thursday 19 February 2026 05:46:26 +0000 (0:00:01.974) 0:03:13.012 *****
2026-02-19 05:46:44.998178 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:44.998199 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:44.998212 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:44.998224 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:44.998236 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:44.998248 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:44.998261 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:44.998272 | orchestrator |
2026-02-19 05:46:44.998285 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-02-19 05:46:44.998297 | orchestrator | Thursday 19 February 2026 05:46:28 +0000 (0:00:01.860) 0:03:14.873 *****
2026-02-19 05:46:44.998309 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:44.998323 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:44.998333 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:44.998343 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:44.998354 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:44.998364 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:44.998375 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:44.998385 | orchestrator |
2026-02-19 05:46:44.998397 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-02-19 05:46:44.998408 | orchestrator | Thursday 19 February 2026 05:46:30 +0000 (0:00:02.067) 0:03:16.941 *****
2026-02-19 05:46:44.998418 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:44.998429 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:44.998439 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:44.998450 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:44.998460 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:44.998470 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:44.998481 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:44.998491 | orchestrator |
2026-02-19 05:46:44.998502 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-02-19 05:46:44.998513 | orchestrator | Thursday 19 February 2026 05:46:32 +0000 (0:00:01.753) 0:03:18.694 *****
2026-02-19 05:46:44.998523 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:44.998534 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:44.998544 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:44.998555 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:44.998565 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:44.998576 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:44.998586 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:44.998597 | orchestrator |
2026-02-19 05:46:44.998608 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-02-19 05:46:44.998618 | orchestrator | Thursday 19 February 2026 05:46:34 +0000 (0:00:02.019) 0:03:20.713 *****
2026-02-19 05:46:44.998629 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:44.998639 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:44.998669 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:44.998680 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:44.998706 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:44.998719 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:44.998736 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:44.998751 | orchestrator |
2026-02-19 05:46:44.998762 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-02-19 05:46:44.998773 | orchestrator | Thursday 19 February 2026 05:46:36 +0000 (0:00:01.877) 0:03:22.590 *****
2026-02-19 05:46:44.998783 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:44.998794 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:44.998804 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:44.998815 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:44.998825 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:44.998836 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:44.998846 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:44.998857 | orchestrator |
2026-02-19 05:46:44.998956 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-02-19 05:46:44.998968 | orchestrator | Thursday 19 February 2026 05:46:38 +0000 (0:00:01.835) 0:03:24.426 *****
2026-02-19 05:46:44.998979 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:46:44.998989 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:46:44.999000 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:46:44.999010 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:46:44.999022 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-19 05:46:44.999033 | orchestrator |
2026-02-19 05:46:44.999043 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-02-19 05:46:44.999054 | orchestrator | Thursday 19 February 2026 05:46:40 +0000 (0:00:02.422) 0:03:26.848 *****
2026-02-19 05:46:44.999065 | orchestrator | ok: [testbed-node-3]
2026-02-19 05:46:44.999077 | orchestrator | ok: [testbed-node-4]
2026-02-19 05:46:44.999087 | orchestrator | ok: [testbed-node-5]
2026-02-19 05:46:44.999098 | orchestrator |
2026-02-19 05:46:44.999108 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-02-19 05:46:44.999126 | orchestrator | Thursday 19 February 2026 05:46:41 +0000 (0:00:01.312) 0:03:28.161 *****
2026-02-19 05:46:44.999138 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 05:46:44.999149 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 05:46:44.999160 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:44.999171 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 05:46:44.999182 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 05:46:44.999192 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:44.999203 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})
2026-02-19 05:46:44.999214 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})
2026-02-19 05:46:44.999224 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:44.999235 | orchestrator |
2026-02-19 05:46:44.999246 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-02-19 05:46:44.999256 | orchestrator | Thursday 19 February 2026 05:46:43 +0000 (0:00:01.400) 0:03:29.562 *****
2026-02-19 05:46:44.999269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:44.999291 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:44.999303 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:44.999314 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:44.999325 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:44.999344 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:52.730506 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:52.730633 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:52.730680 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:52.730697 | orchestrator |
2026-02-19 05:46:52.730713 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-02-19 05:46:52.730728 | orchestrator | Thursday 19 February 2026 05:46:44 +0000 (0:00:01.643) 0:03:31.205 *****
2026-02-19 05:46:52.730737 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:52.730745 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:52.730754 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:52.730762 | orchestrator |
2026-02-19 05:46:52.730771 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-02-19 05:46:52.730780 | orchestrator | Thursday 19 February 2026 05:46:46 +0000 (0:00:01.390) 0:03:32.596 *****
2026-02-19 05:46:52.730788 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:52.730795 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:52.730803 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:52.730811 | orchestrator |
2026-02-19 05:46:52.730819 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-02-19 05:46:52.730842 | orchestrator | Thursday 19 February 2026 05:46:47 +0000 (0:00:01.351) 0:03:33.947 *****
2026-02-19 05:46:52.730851 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:52.730859 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:52.730889 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:52.730898 | orchestrator |
2026-02-19 05:46:52.730917 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-02-19 05:46:52.730925 | orchestrator | Thursday 19 February 2026 05:46:49 +0000 (0:00:01.327) 0:03:35.274 *****
2026-02-19 05:46:52.730933 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:52.730941 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:52.730958 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:52.730966 | orchestrator |
2026-02-19 05:46:52.730974 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-02-19 05:46:52.731002 | orchestrator | Thursday 19 February 2026 05:46:50 +0000 (0:00:01.285) 0:03:36.560 *****
2026-02-19 05:46:52.731011 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 05:46:52.731021 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 05:46:52.731030 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})
2026-02-19 05:46:52.731039 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 05:46:52.731049 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 05:46:52.731058 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})
2026-02-19 05:46:52.731067 | orchestrator |
2026-02-19 05:46:52.731076 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-02-19 05:46:52.731086 | orchestrator | Thursday 19 February 2026 05:46:52 +0000 (0:00:02.138) 0:03:38.698 *****
2026-02-19 05:46:52.731118 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159/osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1771472613.6507092, 'mtime': 1771472613.6447089, 'ctime': 1771472613.6447089, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159/osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:52.731137 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-900578fb-6201-5328-bc2d-5e3d92afe542/osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1771472634.0250683, 'mtime': 1771472634.0220683, 'ctime': 1771472634.0220683, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-900578fb-6201-5328-bc2d-5e3d92afe542/osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:52.731154 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:52.731163 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02/osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 953, 'dev': 6, 'nlink': 1, 'atime': 1771472609.7632835, 'mtime': 1771472609.7582834, 'ctime': 1771472609.7582834, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02/osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:52.731180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160/osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 963, 'dev': 6, 'nlink': 1, 'atime': 1771472630.1326418, 'mtime': 1771472630.1286418, 'ctime': 1771472630.1286418, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160/osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:58.314378 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:58.314536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-98b2861f-503b-5d91-adc9-6468e68ac210/osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1771472610.9116158, 'mtime': 1771472610.9086158, 'ctime': 1771472610.9086158, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-98b2861f-503b-5d91-adc9-6468e68ac210/osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:58.314599 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-3bb39c06-9317-5e70-9108-eeec2efc4456/osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1771472635.0070477, 'mtime': 1771472635.0040474, 'ctime': 1771472635.0040474, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-3bb39c06-9317-5e70-9108-eeec2efc4456/osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:58.314623 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:58.314642 | orchestrator |
2026-02-19 05:46:58.314661 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-02-19 05:46:58.314682 | orchestrator | Thursday 19 February 2026 05:46:53 +0000 (0:00:01.397) 0:03:40.096 *****
2026-02-19 05:46:58.314702 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 05:46:58.314718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 05:46:58.314729 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:58.314740 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 05:46:58.314751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 05:46:58.314762 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:58.314773 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})
2026-02-19 05:46:58.314783 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})
2026-02-19 05:46:58.314794 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:58.314805 | orchestrator |
2026-02-19 05:46:58.314816 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-02-19 05:46:58.314848 | orchestrator | Thursday 19 February 2026 05:46:55 +0000 (0:00:01.324) 0:03:41.420 *****
2026-02-19 05:46:58.314897 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:58.314912 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:58.314935 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:58.314948 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:58.314969 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:58.314982 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:58.314994 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:58.315007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'}, 'ansible_loop_var': 'item'})
2026-02-19 05:46:58.315019 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:58.315032 | orchestrator |
2026-02-19 05:46:58.315044 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] **********************
2026-02-19 05:46:58.315057 | orchestrator | Thursday 19 February 2026 05:46:56 +0000 (0:00:01.385) 0:03:42.806 *****
2026-02-19 05:46:58.315069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'})
2026-02-19 05:46:58.315082 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'})
2026-02-19 05:46:58.315094 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:46:58.315107 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'})
2026-02-19 05:46:58.315119 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'})
2026-02-19 05:46:58.315130 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:46:58.315144 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'})
2026-02-19 05:46:58.315165 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'})
2026-02-19 05:46:58.315186 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:46:58.315207 | orchestrator |
2026-02-19 05:46:58.315229
| orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-02-19 05:46:58.315251 | orchestrator | Thursday 19 February 2026 05:46:58 +0000 (0:00:01.611) 0:03:44.418 ***** 2026-02-19 05:46:58.315273 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-dc132c82-2da4-526a-8d14-ac4e81fe1159', 'data_vg': 'ceph-dc132c82-2da4-526a-8d14-ac4e81fe1159'}, 'ansible_loop_var': 'item'})  2026-02-19 05:46:58.315304 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-900578fb-6201-5328-bc2d-5e3d92afe542', 'data_vg': 'ceph-900578fb-6201-5328-bc2d-5e3d92afe542'}, 'ansible_loop_var': 'item'})  2026-02-19 05:47:07.294168 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:47:07.294258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-64a1f4ab-0c55-53ad-929a-fda4cfe46a02', 'data_vg': 'ceph-64a1f4ab-0c55-53ad-929a-fda4cfe46a02'}, 'ansible_loop_var': 'item'})  2026-02-19 05:47:07.294270 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160', 'data_vg': 'ceph-ac535f4d-dfa1-5efd-bfb5-368e6c7a2160'}, 'ansible_loop_var': 'item'})  2026-02-19 05:47:07.294277 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:47:07.294297 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-98b2861f-503b-5d91-adc9-6468e68ac210', 'data_vg': 'ceph-98b2861f-503b-5d91-adc9-6468e68ac210'}, 'ansible_loop_var': 'item'})  2026-02-19 05:47:07.294305 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-3bb39c06-9317-5e70-9108-eeec2efc4456', 'data_vg': 'ceph-3bb39c06-9317-5e70-9108-eeec2efc4456'}, 'ansible_loop_var': 'item'})  2026-02-19 05:47:07.294311 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:47:07.294318 | orchestrator | 2026-02-19 05:47:07.294326 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-19 05:47:07.294333 | orchestrator | Thursday 19 February 2026 05:46:59 +0000 (0:00:01.380) 0:03:45.798 ***** 2026-02-19 05:47:07.294340 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:47:07.294346 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:47:07.294352 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:47:07.294358 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:47:07.294365 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:47:07.294371 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:47:07.294377 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:47:07.294383 | orchestrator | 2026-02-19 05:47:07.294389 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-19 05:47:07.294395 | orchestrator | Thursday 19 February 2026 05:47:01 +0000 (0:00:01.886) 0:03:47.685 ***** 2026-02-19 05:47:07.294402 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:47:07.294408 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:47:07.294414 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:47:07.294420 | orchestrator | skipping: [testbed-manager] 2026-02-19 
05:47:07.294426 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 05:47:07.294433 | orchestrator | 2026-02-19 05:47:07.294439 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-02-19 05:47:07.294445 | orchestrator | Thursday 19 February 2026 05:47:03 +0000 (0:00:02.436) 0:03:50.122 ***** 2026-02-19 05:47:07.294452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294503 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:47:07.294509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': 
{'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294540 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:47:07.294595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294651 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:47:07.294657 | orchestrator | 2026-02-19 05:47:07.294664 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-19 05:47:07.294670 | orchestrator | Thursday 19 February 2026 05:47:05 +0000 (0:00:01.405) 0:03:51.527 ***** 2026-02-19 05:47:07.294676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-19 05:47:07.294700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294712 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:47:07.294718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294756 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:47:07.294762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': 
{'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294793 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:47:07.294799 | orchestrator | 2026-02-19 05:47:07.294806 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-19 05:47:07.294812 | orchestrator | Thursday 19 February 2026 05:47:07 +0000 (0:00:01.746) 0:03:53.273 ***** 2026-02-19 05:47:07.294818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294849 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:47:07.294855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:07.294935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-19 05:47:23.465758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:23.465919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:23.465937 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:47:23.465951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:23.465963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:23.465974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:23.465985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:23.466073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 05:47:23.466102 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:47:23.466124 | orchestrator | 2026-02-19 05:47:23.466138 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-19 05:47:23.466150 | orchestrator | Thursday 19 February 2026 05:47:08 +0000 (0:00:01.393) 0:03:54.667 ***** 2026-02-19 05:47:23.466185 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:47:23.466197 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:47:23.466208 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:47:23.466218 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:47:23.466229 | orchestrator | skipping: [testbed-node-4] 
2026-02-19 05:47:23.466240 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:47:23.466250 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:47:23.466261 | orchestrator | 2026-02-19 05:47:23.466272 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-19 05:47:23.466282 | orchestrator | Thursday 19 February 2026 05:47:10 +0000 (0:00:01.845) 0:03:56.512 ***** 2026-02-19 05:47:23.466295 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:47:23.466308 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:47:23.466319 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:47:23.466331 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:47:23.466343 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:47:23.466355 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:47:23.466367 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:47:23.466379 | orchestrator | 2026-02-19 05:47:23.466391 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-19 05:47:23.466403 | orchestrator | Thursday 19 February 2026 05:47:12 +0000 (0:00:02.111) 0:03:58.624 ***** 2026-02-19 05:47:23.466416 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:47:23.466428 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:47:23.466440 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:47:23.466450 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:47:23.466461 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:47:23.466472 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:47:23.466482 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:47:23.466493 | orchestrator | 2026-02-19 05:47:23.466504 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] 
*** 2026-02-19 05:47:23.466515 | orchestrator | Thursday 19 February 2026 05:47:14 +0000 (0:00:02.019) 0:04:00.643 ***** 2026-02-19 05:47:23.466526 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:47:23.466536 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:47:23.466547 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:47:23.466557 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:47:23.466568 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:47:23.466579 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:47:23.466589 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:47:23.466600 | orchestrator | 2026-02-19 05:47:23.466611 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-19 05:47:23.466621 | orchestrator | Thursday 19 February 2026 05:47:16 +0000 (0:00:01.979) 0:04:02.623 ***** 2026-02-19 05:47:23.466632 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:47:23.466643 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:47:23.466653 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:47:23.466664 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:47:23.466674 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:47:23.466685 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:47:23.466695 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:47:23.466706 | orchestrator | 2026-02-19 05:47:23.466717 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-19 05:47:23.466728 | orchestrator | Thursday 19 February 2026 05:47:18 +0000 (0:00:01.957) 0:04:04.581 ***** 2026-02-19 05:47:23.466739 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:47:23.466749 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:47:23.466760 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:47:23.466770 | orchestrator | skipping: [testbed-node-3] 
2026-02-19 05:47:23.466781 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:47:23.466791 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:47:23.466809 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:47:23.466820 | orchestrator | 2026-02-19 05:47:23.466831 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-19 05:47:23.466842 | orchestrator | Thursday 19 February 2026 05:47:20 +0000 (0:00:01.994) 0:04:06.575 ***** 2026-02-19 05:47:23.466852 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:47:23.466863 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:47:23.466945 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:47:23.466964 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:47:23.466982 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:47:23.466999 | orchestrator | skipping: [testbed-node-5] 2026-02-19 05:47:23.467018 | orchestrator | skipping: [testbed-manager] 2026-02-19 05:47:23.467035 | orchestrator | 2026-02-19 05:47:23.467077 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-19 05:47:23.467095 | orchestrator | Thursday 19 February 2026 05:47:22 +0000 (0:00:02.207) 0:04:08.782 ***** 2026-02-19 05:47:23.467107 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-19 05:47:23.467119 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-19 05:47:23.467132 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-19 05:47:23.467145 | orchestrator | skipping: 
[testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-19 05:47:23.467165 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-19 05:47:23.467179 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-19 05:47:23.467190 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:47:23.467201 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-19 05:47:23.467212 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-19 05:47:23.467222 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-19 05:47:23.467233 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-19 05:47:23.467244 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-19 05:47:23.467255 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-02-19 05:47:23.467266 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:47:23.467277 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-19 05:47:23.467288 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-19 05:47:23.467308 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-19 05:47:23.467319 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-19 05:47:23.467336 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-19 05:47:23.467351 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-19 05:47:23.467362 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:47:23.467373 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-19 05:47:23.467384 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-19 05:47:23.467402 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile 
rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-19 05:47:27.632419 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-19 05:47:27.632592 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-19 05:47:27.632621 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-19 05:47:27.632642 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-19 05:47:27.632662 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-19 05:47:27.632706 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-19 05:47:27.632729 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-19 05:47:27.632749 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-19 05:47:27.632769 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 
'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-19 05:47:27.632786 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-19 05:47:27.632805 | orchestrator | skipping: [testbed-node-4] 2026-02-19 05:47:27.632826 | orchestrator | skipping: [testbed-node-3] 2026-02-19 05:47:27.632843 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-19 05:47:27.632931 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-19 05:47:27.632955 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-19 05:47:27.632974 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-19 05:47:27.632995 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-19 05:47:27.633016 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-19 05:47:27.633035 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  
2026-02-19 05:47:27.633054 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'}) 
2026-02-19 05:47:27.633075 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:47:27.633097 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'}) 
2026-02-19 05:47:27.633117 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'}) 
2026-02-19 05:47:27.633137 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'}) 
2026-02-19 05:47:27.633159 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:47:27.633180 | orchestrator |
2026-02-19 05:47:27.633231 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-02-19 05:47:27.633254 | orchestrator | Thursday 19 February 2026 05:47:24 +0000 (0:00:02.372) 0:04:11.155 *****
2026-02-19 05:47:27.633272 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:47:27.633290 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:47:27.633308 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:47:27.633327 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:47:27.633346 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:47:27.633362 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:47:27.633383 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:47:27.633401 | orchestrator |
2026-02-19 05:47:27.633419 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-02-19 05:47:27.633439 | orchestrator | Thursday 19 February 2026 05:47:26 +0000 (0:00:02.061) 0:04:13.217 *****
2026-02-19 05:47:27.633458 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'}) 
2026-02-19 05:47:27.633490 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'}) 
2026-02-19 05:47:27.633511 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'}) 
2026-02-19 05:47:27.633531 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'}) 
2026-02-19 05:47:27.633573 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'}) 
2026-02-19 05:47:27.633595 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'}) 
2026-02-19 05:47:27.633614 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:47:27.633633 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'}) 
2026-02-19 05:47:27.633653 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'}) 
2026-02-19 05:47:27.633673 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'}) 
2026-02-19 05:47:27.633693 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'}) 
2026-02-19 05:47:27.633709 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'}) 
2026-02-19 05:47:27.633721 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'}) 
2026-02-19 05:47:27.633731 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:47:27.633742 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'}) 
2026-02-19 05:47:27.633753 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'}) 
2026-02-19 05:47:27.633763 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'}) 
2026-02-19 05:47:27.633774 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'}) 
2026-02-19 05:47:27.633785 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'}) 
2026-02-19 05:47:27.633796 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'}) 
2026-02-19 05:47:27.633806 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:47:27.633831 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'}) 
2026-02-19 05:47:45.677774 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'}) 
2026-02-19 05:47:45.677969 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'}) 
2026-02-19 05:47:45.677990 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'}) 
2026-02-19 05:47:45.678090 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'}) 
2026-02-19 05:47:45.678120 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'}) 
2026-02-19 05:47:45.678135 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'}) 
2026-02-19 05:47:45.678147 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'}) 
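The `openstack_keys` items looping through the validation tasks above all share one shape: a client name, a per-daemon caps map, and a `0600` keyring mode. As a rough stand-in (a hypothetical helper, not ceph-ansible's actual validation code), the format check these tasks perform amounts to:

```python
# Sketch of the key-entry shape seen in the log; validate_keys is a
# hypothetical illustration, not the ceph-validate implementation.
openstack_keys = [
    {"name": "client.glance",
     "caps": {"mon": "profile rbd",
              "osd": "profile rbd pool=volumes, profile rbd pool=images"},
     "mode": "0600"},
    {"name": "client.manila",
     "caps": {"mgr": "allow rw", "mon": "allow r",
              "osd": "allow rw pool=cephfs_data"},
     "mode": "0600"},
]

def validate_keys(keys):
    """Return names of entries missing a required field, with empty caps,
    or using an unknown daemon type in caps."""
    bad = []
    for key in keys:
        caps = key.get("caps", {})
        has_fields = {"name", "caps", "mode"} <= key.keys()
        known_daemons = set(caps) <= {"mon", "mgr", "osd", "mds"}
        if not (has_fields and caps and known_daemons):
            bad.append(key.get("name", "<unnamed>"))
    return bad
```

In the run above these checks are skipped on every host, since the conditions guarding them were not met.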
2026-02-19 05:47:45.678158 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'}) 
2026-02-19 05:47:45.678169 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'}) 
2026-02-19 05:47:45.678180 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'}) 
2026-02-19 05:47:45.678191 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'}) 
2026-02-19 05:47:45.678202 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'}) 
2026-02-19 05:47:45.678214 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:47:45.678227 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'}) 
2026-02-19 05:47:45.678238 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'}) 
2026-02-19 05:47:45.678249 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:47:45.678260 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'}) 
2026-02-19 05:47:45.678271 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'}) 
2026-02-19 05:47:45.678282 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'}) 
2026-02-19 05:47:45.678293 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'}) 
2026-02-19 05:47:45.678304 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'}) 
2026-02-19 05:47:45.678315 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'}) 
2026-02-19 05:47:45.678325 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'}) 
2026-02-19 05:47:45.678336 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:47:45.678378 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'}) 
2026-02-19 05:47:45.678390 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'}) 
2026-02-19 05:47:45.678401 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:47:45.678413 | orchestrator |
2026-02-19 05:47:45.678425 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-02-19 05:47:45.678438 | orchestrator | Thursday 19 February 2026 05:47:28 +0000 (0:00:01.963) 0:04:15.180 *****
2026-02-19 05:47:45.678448 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:47:45.678459 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:47:45.678470 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:47:45.678481 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:47:45.678492 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:47:45.678503 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:47:45.678514 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:47:45.678524 | orchestrator |
2026-02-19 05:47:45.678540 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-02-19 05:47:45.678551 | orchestrator | Thursday 19 February 2026 05:47:30 +0000 (0:00:02.039) 0:04:17.220 *****
2026-02-19 05:47:45.678563 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:47:45.678573 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:47:45.678584 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:47:45.678595 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:47:45.678605 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:47:45.678616 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:47:45.678627 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:47:45.678638 | orchestrator |
2026-02-19 05:47:45.678649 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-02-19 05:47:45.678659 | orchestrator | Thursday 19 February 2026 05:47:32 +0000 (0:00:01.698) 0:04:18.918 *****
2026-02-19 05:47:45.678670 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:47:45.678681 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:47:45.678692 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:47:45.678702 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:47:45.678713 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:47:45.678724 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:47:45.678734 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:47:45.678745 | orchestrator |
2026-02-19 05:47:45.678756 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-19 05:47:45.678767 | orchestrator | Thursday 19 February 2026 05:47:34 +0000 (0:00:01.853) 0:04:20.772 *****
2026-02-19 05:47:45.678778 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-19 05:47:45.678791 | orchestrator |
2026-02-19 05:47:45.678803 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-02-19 05:47:45.678813 | orchestrator | Thursday 19 February 2026 05:47:36 +0000 (0:00:02.388) 0:04:23.161 *****
2026-02-19 05:47:45.678825 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-19 05:47:45.678836 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-19 05:47:45.678847 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-19 05:47:45.678858 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-19 05:47:45.678868 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-19 05:47:45.678909 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-19 05:47:45.678927 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-19 05:47:45.678938 | orchestrator |
2026-02-19 05:47:45.678949 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-02-19 05:47:45.678960 | orchestrator | Thursday 19 February 2026 05:47:38 +0000 (0:00:02.017) 0:04:25.179 *****
2026-02-19 05:47:45.678971 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:47:45.678982 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:47:45.678992 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:47:45.679003 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:47:45.679014 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:47:45.679024 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:47:45.679035 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:47:45.679046 | orchestrator |
2026-02-19 05:47:45.679057 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-02-19 05:47:45.679067 | orchestrator | Thursday 19 February 2026 05:47:40 +0000 (0:00:02.025) 0:04:27.204 *****
2026-02-19 05:47:45.679078 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:47:45.679089 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:47:45.679100 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:47:45.679110 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:47:45.679121 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:47:45.679132 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:47:45.679142 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:47:45.679153 | orchestrator |
2026-02-19 05:47:45.679164 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-19 05:47:45.679175 | orchestrator | Thursday 19 February 2026 05:47:42 +0000 (0:00:01.938) 0:04:29.142 *****
2026-02-19 05:47:45.679186 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:47:45.679198 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:47:45.679209 | orchestrator | ok: [testbed-node-2]
2026-02-19 05:47:45.679220 | orchestrator | ok: [testbed-node-3]
2026-02-19 05:47:45.679230 | orchestrator | ok: [testbed-node-4]
2026-02-19 05:47:45.679241 | orchestrator | ok: [testbed-node-5]
2026-02-19 05:47:45.679260 | orchestrator | ok: [testbed-manager]
2026-02-19 05:48:31.880647 | orchestrator |
2026-02-19 05:48:31.880779 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-19 05:48:31.880802 | orchestrator | Thursday 19 February 2026 05:47:45 +0000 (0:00:02.741) 0:04:31.884 *****
2026-02-19 05:48:31.880817 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:48:31.880830 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:48:31.880843 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:48:31.880855 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:48:31.880866 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:48:31.880947 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:48:31.880965 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:48:31.880978 | orchestrator |
2026-02-19 05:48:31.880992 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-19 05:48:31.881005 | orchestrator | Thursday 19 February 2026 05:47:47 +0000 (0:00:02.309) 0:04:34.193 *****
2026-02-19 05:48:31.881018 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:48:31.881031 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:48:31.881044 | orchestrator | skipping: [testbed-node-2]
2026-02-19 05:48:31.881058 | orchestrator | skipping: [testbed-node-3]
2026-02-19 05:48:31.881071 | orchestrator | skipping: [testbed-node-4]
2026-02-19 05:48:31.881104 | orchestrator | skipping: [testbed-node-5]
2026-02-19 05:48:31.881117 | orchestrator | skipping: [testbed-manager]
2026-02-19 05:48:31.881130 | orchestrator |
2026-02-19 05:48:31.881142 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-19 05:48:31.881155 | orchestrator | Thursday 19 February 2026 05:47:50 +0000 (0:00:02.234) 0:04:36.427 *****
2026-02-19 05:48:31.881180 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.881194 | orchestrator |
2026-02-19 05:48:31.881237 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-19 05:48:31.881250 | orchestrator | Thursday 19 February 2026 05:47:52 +0000 (0:00:02.770) 0:04:39.197 *****
2026-02-19 05:48:31.881262 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:48:31.881275 | orchestrator |
2026-02-19 05:48:31.881287 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-19 05:48:31.881300 | orchestrator |
2026-02-19 05:48:31.881312 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-19 05:48:31.881325 | orchestrator | Thursday 19 February 2026 05:47:55 +0000 (0:00:02.130) 0:04:41.328 *****
2026-02-19 05:48:31.881337 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.881350 | orchestrator |
2026-02-19 05:48:31.881364 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-19 05:48:31.881377 | orchestrator | Thursday 19 February 2026 05:47:56 +0000 (0:00:01.441) 0:04:42.770 *****
2026-02-19 05:48:31.881388 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.881401 | orchestrator |
2026-02-19 05:48:31.881413 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-19 05:48:31.881426 | orchestrator | Thursday 19 February 2026 05:47:57 +0000 (0:00:01.109) 0:04:43.880 *****
2026-02-19 05:48:31.881441 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-19 05:48:31.881456 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-19 05:48:31.881469 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-19 05:48:31.881482 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-19 05:48:31.881495 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-19 05:48:31.881530 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}]) 
2026-02-19 05:48:31.881546 | orchestrator |
2026-02-19 05:48:31.881559 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-19 05:48:31.881571 | orchestrator |
2026-02-19 05:48:31.881584 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-19 05:48:31.881596 | orchestrator | Thursday 19 February 2026 05:48:08 +0000 (0:00:11.034) 0:04:54.915 *****
2026-02-19 05:48:31.881617 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.881630 | orchestrator |
2026-02-19 05:48:31.881642 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-19 05:48:31.881654 | orchestrator | Thursday 19 February 2026 05:48:10 +0000 (0:00:01.457) 0:04:56.372 *****
2026-02-19 05:48:31.881666 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.881678 | orchestrator |
2026-02-19 05:48:31.881690 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-19 05:48:31.881703 | orchestrator | Thursday 19 February 2026 05:48:11 +0000 (0:00:01.100) 0:04:57.473 *****
2026-02-19 05:48:31.881722 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:48:31.881736 | orchestrator |
2026-02-19 05:48:31.881749 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-19 05:48:31.881762 | orchestrator | Thursday 19 February 2026 05:48:12 +0000 (0:00:01.141) 0:04:58.614 *****
2026-02-19 05:48:31.881775 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.881789 | orchestrator |
2026-02-19 05:48:31.881801 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-19 05:48:31.881814 | orchestrator | Thursday 19 February 2026 05:48:13 +0000 (0:00:01.116) 0:04:59.803 *****
2026-02-19 05:48:31.881828 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-19 05:48:31.881843 | orchestrator |
2026-02-19 05:48:31.881856 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-19 05:48:31.881871 | orchestrator | Thursday 19 February 2026 05:48:14 +0000 (0:00:01.510) 0:05:00.919 *****
2026-02-19 05:48:31.881937 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.881951 | orchestrator |
2026-02-19 05:48:31.881964 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-19 05:48:31.881977 | orchestrator | Thursday 19 February 2026 05:48:16 +0000 (0:00:01.101) 0:05:02.430 *****
2026-02-19 05:48:31.881991 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.881999 | orchestrator |
2026-02-19 05:48:31.882007 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-19 05:48:31.882068 | orchestrator | Thursday 19 February 2026 05:48:17 +0000 (0:00:01.452) 0:05:03.532 *****
2026-02-19 05:48:31.882077 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.882085 | orchestrator |
2026-02-19 05:48:31.882093 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-19 05:48:31.882100 | orchestrator | Thursday 19 February 2026 05:48:18 +0000 (0:00:01.452) 0:05:04.984 *****
2026-02-19 05:48:31.882108 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.882117 | orchestrator |
2026-02-19 05:48:31.882132 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-19 05:48:31.882157 | orchestrator | Thursday 19 February 2026 05:48:19 +0000 (0:00:01.104) 0:05:06.089 *****
2026-02-19 05:48:31.882172 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.882186 | orchestrator |
2026-02-19 05:48:31.882199 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-19 05:48:31.882213 | orchestrator | Thursday 19 February 2026 05:48:21 +0000 (0:00:01.193) 0:05:07.282 *****
2026-02-19 05:48:31.882226 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.882239 | orchestrator |
2026-02-19 05:48:31.882252 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-19 05:48:31.882266 | orchestrator | Thursday 19 February 2026 05:48:22 +0000 (0:00:01.117) 0:05:08.400 *****
2026-02-19 05:48:31.882280 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:48:31.882293 | orchestrator |
2026-02-19 05:48:31.882307 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-19 05:48:31.882320 | orchestrator | Thursday 19 February 2026 05:48:23 +0000 (0:00:01.212) 0:05:09.612 *****
2026-02-19 05:48:31.882333 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.882347 | orchestrator |
2026-02-19 05:48:31.882360 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-19 05:48:31.882384 | orchestrator | Thursday 19 February 2026 05:48:24 +0000 (0:00:01.119) 0:05:10.732 *****
2026-02-19 05:48:31.882398 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 05:48:31.882412 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 05:48:31.882426 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 05:48:31.882439 | orchestrator |
2026-02-19 05:48:31.882454 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-19 05:48:31.882485 | orchestrator | Thursday 19 February 2026 05:48:26 +0000 (0:00:01.671) 0:05:12.403 *****
2026-02-19 05:48:31.882498 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:31.882509 | orchestrator |
2026-02-19 05:48:31.882522 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-19 05:48:31.882534 | orchestrator | Thursday 19 February 2026 05:48:27 +0000 (0:00:01.222) 0:05:13.625 *****
2026-02-19 05:48:31.882546 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 05:48:31.882558 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 05:48:31.882571 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 05:48:31.882583 | orchestrator |
2026-02-19 05:48:31.882595 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-19 05:48:31.882607 | orchestrator | Thursday 19 February 2026 05:48:30 +0000 (0:00:03.071) 0:05:16.696 *****
2026-02-19 05:48:31.882619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2026-02-19 05:48:31.882631 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2026-02-19 05:48:31.882660 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2026-02-19 05:48:54.469764 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:48:54.469873 | orchestrator |
2026-02-19 05:48:54.469949 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-19 05:48:54.469959 | orchestrator | Thursday 19 February 2026 05:48:31 +0000 (0:00:01.395) 0:05:18.092 *****
2026-02-19 05:48:54.469968 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 
2026-02-19 05:48:54.469977 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2026-02-19 05:48:54.469998 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 
2026-02-19 05:48:54.470005 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:48:54.470057 | orchestrator |
2026-02-19 05:48:54.470065 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-19 05:48:54.470071 | orchestrator | Thursday 19 February 2026 05:48:33 +0000 (0:00:01.859) 0:05:19.951 *****
2026-02-19 05:48:54.470079 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-02-19 05:48:54.470089 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-02-19 05:48:54.470115 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-02-19 05:48:54.470122 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:48:54.470128 | orchestrator |
2026-02-19 05:48:54.470134 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-19 05:48:54.470140 | orchestrator | Thursday 19 February 2026 05:48:34 +0000 (0:00:01.146) 0:05:21.097 *****
2026-02-19 05:48:54.470149 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd0a6e5ab4aac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 05:48:27.910443', 'end': '2026-02-19 05:48:27.954934', 'delta': '0:00:00.044491', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0a6e5ab4aac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-19 05:48:54.470174 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a8e499fc5d9a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 05:48:28.443267', 'end': '2026-02-19 05:48:28.490025', 'delta': '0:00:00.046758', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8e499fc5d9a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 05:48:54.470185 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7f7671ec0784', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 05:48:29.248869', 'end': '2026-02-19 05:48:29.308746', 'delta': '0:00:00.059877', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7f7671ec0784'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 05:48:54.470192 | orchestrator |
2026-02-19 05:48:54.470198 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-19 05:48:54.470205 | orchestrator | Thursday 19 February 2026 05:48:36 +0000 (0:00:01.191) 0:05:22.288 *****
2026-02-19 05:48:54.470211 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:54.470218 | orchestrator |
2026-02-19 05:48:54.470225 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-19 05:48:54.470231 | orchestrator | Thursday 19 February 2026 05:48:37 +0000 (0:00:01.532) 0:05:23.821 *****
2026-02-19 05:48:54.470237 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:48:54.470244 | orchestrator |
2026-02-19 05:48:54.470250 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-19 05:48:54.470256 | orchestrator | Thursday 19 February 2026 05:48:38 +0000 (0:00:01.211) 0:05:25.033 *****
2026-02-19 05:48:54.470268 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:54.470274 | orchestrator |
2026-02-19 05:48:54.470281 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-19 05:48:54.470288 | orchestrator | Thursday 19 February 2026 05:48:39 +0000 (0:00:01.118) 0:05:26.151 *****
2026-02-19 05:48:54.470296 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-19 05:48:54.470303 | orchestrator |
2026-02-19 05:48:54.470310 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 05:48:54.470317 | orchestrator | Thursday 19 February 2026 05:48:41 +0000 (0:00:01.993) 0:05:28.144 *****
2026-02-19 05:48:54.470325 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:48:54.470332 | orchestrator |
2026-02-19 05:48:54.470339 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-19 05:48:54.470346 | orchestrator | Thursday 19 February 2026 05:48:43 +0000 (0:00:01.176) 0:05:29.321 *****
2026-02-19 05:48:54.470353 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:48:54.470360 | orchestrator |
2026-02-19 05:48:54.470367 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-19 05:48:54.470375 | orchestrator | Thursday 19 February 2026 05:48:44 +0000 (0:00:01.113) 0:05:30.435 *****
2026-02-19 05:48:54.470382 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:48:54.470389 | orchestrator |
2026-02-19 05:48:54.470396 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 05:48:54.470403 | orchestrator | Thursday 19 February 2026 05:48:45 +0000 (0:00:01.253) 0:05:31.688 *****
2026-02-19 05:48:54.470410 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:48:54.470417 | orchestrator |
2026-02-19 05:48:54.470424 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-19 05:48:54.470431 | orchestrator | Thursday 19 February 2026 05:48:46 +0000 (0:00:01.092) 0:05:32.781 *****
2026-02-19 05:48:54.470438 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:48:54.470445 | orchestrator | 2026-02-19 05:48:54.470453 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-19 05:48:54.470460 | orchestrator | Thursday 19 February 2026 05:48:47 +0000 (0:00:01.125) 0:05:33.906 ***** 2026-02-19 05:48:54.470467 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:48:54.470474 | orchestrator | 2026-02-19 05:48:54.470481 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-19 05:48:54.470489 | orchestrator | Thursday 19 February 2026 05:48:48 +0000 (0:00:01.107) 0:05:35.014 ***** 2026-02-19 05:48:54.470496 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:48:54.470503 | orchestrator | 2026-02-19 05:48:54.470510 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 05:48:54.470517 | orchestrator | Thursday 19 February 2026 05:48:49 +0000 (0:00:01.092) 0:05:36.107 ***** 2026-02-19 05:48:54.470524 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:48:54.470531 | orchestrator | 2026-02-19 05:48:54.470539 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 05:48:54.470546 | orchestrator | Thursday 19 February 2026 05:48:51 +0000 (0:00:01.119) 0:05:37.227 ***** 2026-02-19 05:48:54.470553 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:48:54.470560 | orchestrator | 2026-02-19 05:48:54.470567 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 05:48:54.470575 | orchestrator | Thursday 19 February 2026 05:48:52 +0000 (0:00:01.105) 0:05:38.332 ***** 2026-02-19 05:48:54.470582 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:48:54.470590 | orchestrator | 2026-02-19 05:48:54.470597 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-19 05:48:54.470604 | orchestrator | Thursday 19 February 2026 05:48:53 +0000 (0:00:01.115) 0:05:39.447 ***** 2026-02-19 05:48:54.470616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:48:55.678439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:48:55.678564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:48:55.678583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 05:48:55.678599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:48:55.678610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:48:55.678622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:48:55.678665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d17f80a', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 05:48:55.678702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:48:55.678715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:48:55.678727 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:48:55.678741 | orchestrator | 2026-02-19 05:48:55.678754 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 05:48:55.678766 | orchestrator | Thursday 19 February 2026 05:48:54 +0000 (0:00:01.231) 0:05:40.679 ***** 2026-02-19 05:48:55.678779 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:48:55.678791 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:48:55.678804 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:48:55.678831 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:49:09.363343 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:49:09.363478 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:49:09.363505 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:49:09.363555 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d17f80a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:49:09.363618 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:49:09.363640 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:49:09.363660 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:49:09.363682 | orchestrator | 2026-02-19 05:49:09.363696 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 05:49:09.363709 | 
orchestrator | Thursday 19 February 2026 05:48:55 +0000 (0:00:01.217) 0:05:41.897 ***** 2026-02-19 05:49:09.363720 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:49:09.363731 | orchestrator | 2026-02-19 05:49:09.363742 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 05:49:09.363753 | orchestrator | Thursday 19 February 2026 05:48:57 +0000 (0:00:01.541) 0:05:43.438 ***** 2026-02-19 05:49:09.363764 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:49:09.363775 | orchestrator | 2026-02-19 05:49:09.363786 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 05:49:09.363797 | orchestrator | Thursday 19 February 2026 05:48:58 +0000 (0:00:01.115) 0:05:44.553 ***** 2026-02-19 05:49:09.363807 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:49:09.363818 | orchestrator | 2026-02-19 05:49:09.363829 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 05:49:09.363839 | orchestrator | Thursday 19 February 2026 05:48:59 +0000 (0:00:01.463) 0:05:46.017 ***** 2026-02-19 05:49:09.363851 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:49:09.363863 | orchestrator | 2026-02-19 05:49:09.363876 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 05:49:09.363916 | orchestrator | Thursday 19 February 2026 05:49:00 +0000 (0:00:01.095) 0:05:47.112 ***** 2026-02-19 05:49:09.363939 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:49:09.363952 | orchestrator | 2026-02-19 05:49:09.363965 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 05:49:09.363977 | orchestrator | Thursday 19 February 2026 05:49:02 +0000 (0:00:01.217) 0:05:48.330 ***** 2026-02-19 05:49:09.363989 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:49:09.364002 | orchestrator | 2026-02-19 05:49:09.364015 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 05:49:09.364028 | orchestrator | Thursday 19 February 2026 05:49:03 +0000 (0:00:01.115) 0:05:49.445 ***** 2026-02-19 05:49:09.364038 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:49:09.364050 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-19 05:49:09.364061 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-19 05:49:09.364072 | orchestrator | 2026-02-19 05:49:09.364083 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 05:49:09.364093 | orchestrator | Thursday 19 February 2026 05:49:05 +0000 (0:00:01.863) 0:05:51.309 ***** 2026-02-19 05:49:09.364104 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-19 05:49:09.364115 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-19 05:49:09.364126 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-19 05:49:09.364137 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:49:09.364148 | orchestrator | 2026-02-19 05:49:09.364159 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 05:49:09.364170 | orchestrator | Thursday 19 February 2026 05:49:06 +0000 (0:00:01.147) 0:05:52.457 ***** 2026-02-19 05:49:09.364181 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:49:09.364192 | orchestrator | 2026-02-19 05:49:09.364202 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-19 05:49:09.364213 | orchestrator | Thursday 19 February 2026 05:49:07 +0000 (0:00:01.092) 0:05:53.549 ***** 2026-02-19 05:49:09.364224 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:49:09.364235 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 
05:49:09.364247 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 05:49:09.364258 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 05:49:09.364269 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 05:49:09.364290 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 05:50:10.755594 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 05:50:10.755709 | orchestrator | 2026-02-19 05:50:10.755726 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-19 05:50:10.755757 | orchestrator | Thursday 19 February 2026 05:49:09 +0000 (0:00:02.021) 0:05:55.571 ***** 2026-02-19 05:50:10.755769 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:50:10.755781 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 05:50:10.755792 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 05:50:10.755803 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 05:50:10.755813 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 05:50:10.755824 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 05:50:10.755835 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 05:50:10.755846 | orchestrator | 2026-02-19 05:50:10.755857 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-19 05:50:10.755868 | orchestrator | Thursday 19 February 2026 05:49:11 +0000 (0:00:02.642) 0:05:58.213 
***** 2026-02-19 05:50:10.755952 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-19 05:50:10.755965 | orchestrator | 2026-02-19 05:50:10.755976 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-19 05:50:10.755987 | orchestrator | Thursday 19 February 2026 05:49:14 +0000 (0:00:02.388) 0:06:00.602 ***** 2026-02-19 05:50:10.755998 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:50:10.756009 | orchestrator | 2026-02-19 05:50:10.756020 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-19 05:50:10.756031 | orchestrator | Thursday 19 February 2026 05:49:15 +0000 (0:00:01.281) 0:06:01.883 ***** 2026-02-19 05:50:10.756044 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:50:10.756056 | orchestrator | 2026-02-19 05:50:10.756068 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-19 05:50:10.756081 | orchestrator | Thursday 19 February 2026 05:49:16 +0000 (0:00:01.150) 0:06:03.034 ***** 2026-02-19 05:50:10.756093 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-19 05:50:10.756105 | orchestrator | 2026-02-19 05:50:10.756117 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-19 05:50:10.756129 | orchestrator | Thursday 19 February 2026 05:49:19 +0000 (0:00:02.266) 0:06:05.300 ***** 2026-02-19 05:50:10.756142 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:50:10.756154 | orchestrator | 2026-02-19 05:50:10.756166 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-19 05:50:10.756178 | orchestrator | Thursday 19 February 2026 05:49:20 +0000 (0:00:01.113) 0:06:06.414 ***** 2026-02-19 05:50:10.756190 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:50:10.756203 | orchestrator | ok: [testbed-node-0 
-> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 05:50:10.756215 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 05:50:10.756227 | orchestrator |
2026-02-19 05:50:10.756239 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-19 05:50:10.756253 | orchestrator | Thursday 19 February 2026 05:49:22 +0000 (0:00:02.495) 0:06:08.910 *****
2026-02-19 05:50:10.756265 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-19 05:50:10.756278 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-19 05:50:10.756292 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-19 05:50:10.756303 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-19 05:50:10.756313 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-19 05:50:10.756325 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-19 05:50:10.756336 | orchestrator |
2026-02-19 05:50:10.756347 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-19 05:50:10.756357 | orchestrator | Thursday 19 February 2026 05:49:36 +0000 (0:00:13.739) 0:06:22.650 *****
2026-02-19 05:50:10.756368 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 05:50:10.756379 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 05:50:10.756390 | orchestrator |
2026-02-19 05:50:10.756401 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-19 05:50:10.756412 | orchestrator | Thursday 19 February 2026 05:49:40 +0000 (0:00:04.104) 0:06:26.754 *****
2026-02-19 05:50:10.756423 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:50:10.756434 | orchestrator |
2026-02-19 05:50:10.756445 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 05:50:10.756456 | orchestrator | Thursday 19 February 2026 05:49:43 +0000 (0:00:02.730) 0:06:29.485 *****
2026-02-19 05:50:10.756467 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-02-19 05:50:10.756485 | orchestrator |
2026-02-19 05:50:10.756497 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-19 05:50:10.756507 | orchestrator | Thursday 19 February 2026 05:49:44 +0000 (0:00:01.453) 0:06:30.939 *****
2026-02-19 05:50:10.756536 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-19 05:50:10.756547 | orchestrator |
2026-02-19 05:50:10.756558 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-19 05:50:10.756569 | orchestrator | Thursday 19 February 2026 05:49:46 +0000 (0:00:01.568) 0:06:32.507 *****
2026-02-19 05:50:10.756580 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:50:10.756591 | orchestrator |
2026-02-19 05:50:10.756608 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-19 05:50:10.756619 | orchestrator | Thursday 19 February 2026 05:49:47 +0000 (0:00:01.569) 0:06:34.077 *****
2026-02-19 05:50:10.756630 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:50:10.756641 | orchestrator |
2026-02-19 05:50:10.756652 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-19 05:50:10.756662 | orchestrator | Thursday 19 February 2026 05:49:48 +0000 (0:00:01.103) 0:06:35.180 *****
2026-02-19 05:50:10.756673 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:50:10.756684 | orchestrator |
2026-02-19 05:50:10.756695 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-19 05:50:10.756706 | orchestrator | Thursday 19 February 2026 05:49:50 +0000 (0:00:01.100) 0:06:36.281 *****
2026-02-19 05:50:10.756717 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:50:10.756728 | orchestrator |
2026-02-19 05:50:10.756739 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-19 05:50:10.756749 | orchestrator | Thursday 19 February 2026 05:49:51 +0000 (0:00:01.143) 0:06:37.424 *****
2026-02-19 05:50:10.756760 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:50:10.756771 | orchestrator |
2026-02-19 05:50:10.756782 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-19 05:50:10.756792 | orchestrator | Thursday 19 February 2026 05:49:52 +0000 (0:00:01.565) 0:06:38.989 *****
2026-02-19 05:50:10.756803 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:50:10.756814 | orchestrator |
2026-02-19 05:50:10.756825 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-19 05:50:10.756836 | orchestrator | Thursday 19 February 2026 05:49:53 +0000 (0:00:01.139) 0:06:40.129 *****
2026-02-19 05:50:10.756846 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:50:10.756857 | orchestrator |
2026-02-19 05:50:10.756868 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-19 05:50:10.756879 | orchestrator | Thursday 19 February 2026 05:49:55 +0000 (0:00:01.131) 0:06:41.260 *****
2026-02-19 05:50:10.756890 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:50:10.756931 | orchestrator |
2026-02-19 05:50:10.756942 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-19 05:50:10.756953 | orchestrator | Thursday 19 February 2026 05:49:56 +0000 (0:00:01.591) 0:06:42.851 *****
2026-02-19 05:50:10.756964 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:50:10.756975 | orchestrator |
2026-02-19 05:50:10.756986 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-19 05:50:10.756997 | orchestrator | Thursday 19 February 2026 05:49:58 +0000 (0:00:01.620) 0:06:44.472 *****
2026-02-19 05:50:10.757008 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:50:10.757019 | orchestrator |
2026-02-19 05:50:10.757030 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 05:50:10.757040 | orchestrator | Thursday 19 February 2026 05:49:59 +0000 (0:00:01.135) 0:06:45.607 *****
2026-02-19 05:50:10.757051 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:50:10.757062 | orchestrator |
2026-02-19 05:50:10.757073 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 05:50:10.757084 | orchestrator | Thursday 19 February 2026 05:50:00 +0000 (0:00:01.161) 0:06:46.769 *****
2026-02-19 05:50:10.757103 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:50:10.757114 | orchestrator |
2026-02-19 05:50:10.757125 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 05:50:10.757136 | orchestrator | Thursday 19 February 2026 05:50:01 +0000 (0:00:01.133) 0:06:47.903 *****
2026-02-19 05:50:10.757147 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:50:10.757157 | orchestrator |
2026-02-19 05:50:10.757168 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 05:50:10.757179 | orchestrator | Thursday 19 February 2026 05:50:02 +0000 (0:00:01.119) 0:06:49.023 *****
2026-02-19 05:50:10.757190 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:50:10.757201 | orchestrator |
2026-02-19 05:50:10.757212 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 05:50:10.757223 | orchestrator | Thursday 19 February 2026 05:50:03 +0000 (0:00:01.107) 0:06:50.130 *****
2026-02-19 05:50:10.757234 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:50:10.757245 | orchestrator |
2026-02-19 05:50:10.757256 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 05:50:10.757266 | orchestrator | Thursday 19 February 2026 05:50:05 +0000 (0:00:01.129) 0:06:51.260 *****
2026-02-19 05:50:10.757277 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:50:10.757288 | orchestrator |
2026-02-19 05:50:10.757299 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 05:50:10.757310 | orchestrator | Thursday 19 February 2026 05:50:06 +0000 (0:00:01.161) 0:06:52.422 *****
2026-02-19 05:50:10.757321 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:50:10.757332 | orchestrator |
2026-02-19 05:50:10.757343 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 05:50:10.757354 | orchestrator | Thursday 19 February 2026 05:50:07 +0000 (0:00:01.133) 0:06:53.555 *****
2026-02-19 05:50:10.757364 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:50:10.757375 | orchestrator |
2026-02-19 05:50:10.757386 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 05:50:10.757397 | orchestrator | Thursday 19 February 2026 05:50:08 +0000 (0:00:01.145) 0:06:54.701 *****
2026-02-19 05:50:10.757408 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:50:10.757419 | orchestrator |
2026-02-19 05:50:10.757430 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-19 05:50:10.757441 | orchestrator | Thursday 19 February 2026 05:50:09 +0000 (0:00:01.127) 0:06:55.829 *****
2026-02-19 05:50:10.757452 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:50:10.757463 | orchestrator |
2026-02-19 05:50:10.757474 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-19 05:50:10.757492 | orchestrator | Thursday 19 February 2026 05:50:10 +0000 (0:00:01.137) 0:06:56.966 *****
2026-02-19 05:51:00.844374 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.844503 | orchestrator |
2026-02-19 05:51:00.844526 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-19 05:51:00.844542 | orchestrator | Thursday 19 February 2026 05:50:11 +0000 (0:00:01.125) 0:06:58.091 *****
2026-02-19 05:51:00.844574 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.844590 | orchestrator |
2026-02-19 05:51:00.844604 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-19 05:51:00.844619 | orchestrator | Thursday 19 February 2026 05:50:13 +0000 (0:00:01.163) 0:06:59.255 *****
2026-02-19 05:51:00.844633 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.844647 | orchestrator |
2026-02-19 05:51:00.844661 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-19 05:51:00.844674 | orchestrator | Thursday 19 February 2026 05:50:14 +0000 (0:00:01.106) 0:07:00.361 *****
2026-02-19 05:51:00.844688 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.844702 | orchestrator |
2026-02-19 05:51:00.844715 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-19 05:51:00.844729 | orchestrator | Thursday 19 February 2026 05:50:15 +0000 (0:00:01.196) 0:07:01.557 *****
2026-02-19 05:51:00.844767 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.844781 | orchestrator |
2026-02-19 05:51:00.844795 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-19 05:51:00.844808 | orchestrator | Thursday 19 February 2026 05:50:16 +0000 (0:00:01.166) 0:07:02.724 *****
2026-02-19 05:51:00.844821 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.844836 | orchestrator |
2026-02-19 05:51:00.844848 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-19 05:51:00.844864 | orchestrator | Thursday 19 February 2026 05:50:17 +0000 (0:00:01.137) 0:07:03.862 *****
2026-02-19 05:51:00.844877 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.844890 | orchestrator |
2026-02-19 05:51:00.844903 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-19 05:51:00.844937 | orchestrator | Thursday 19 February 2026 05:50:18 +0000 (0:00:01.138) 0:07:05.001 *****
2026-02-19 05:51:00.844951 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.844964 | orchestrator |
2026-02-19 05:51:00.844979 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-19 05:51:00.844992 | orchestrator | Thursday 19 February 2026 05:50:19 +0000 (0:00:01.107) 0:07:06.109 *****
2026-02-19 05:51:00.845006 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845019 | orchestrator |
2026-02-19 05:51:00.845033 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-19 05:51:00.845047 | orchestrator | Thursday 19 February 2026 05:50:21 +0000 (0:00:01.153) 0:07:07.263 *****
2026-02-19 05:51:00.845061 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845075 | orchestrator |
2026-02-19 05:51:00.845089 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-19 05:51:00.845102 | orchestrator | Thursday 19 February 2026 05:50:22 +0000 (0:00:01.118) 0:07:08.381 *****
2026-02-19 05:51:00.845115 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845128 | orchestrator |
2026-02-19 05:51:00.845142 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-19 05:51:00.845156 | orchestrator | Thursday 19 February 2026 05:50:23 +0000 (0:00:01.132) 0:07:09.514 *****
2026-02-19 05:51:00.845170 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:51:00.845184 | orchestrator |
2026-02-19 05:51:00.845197 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-19 05:51:00.845210 | orchestrator | Thursday 19 February 2026 05:50:25 +0000 (0:00:02.023) 0:07:11.538 *****
2026-02-19 05:51:00.845223 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:51:00.845237 | orchestrator |
2026-02-19 05:51:00.845251 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-19 05:51:00.845263 | orchestrator | Thursday 19 February 2026 05:50:27 +0000 (0:00:02.551) 0:07:14.089 *****
2026-02-19 05:51:00.845278 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-19 05:51:00.845289 | orchestrator |
2026-02-19 05:51:00.845297 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-19 05:51:00.845305 | orchestrator | Thursday 19 February 2026 05:50:29 +0000 (0:00:01.469) 0:07:15.559 *****
2026-02-19 05:51:00.845313 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845320 | orchestrator |
2026-02-19 05:51:00.845328 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-19 05:51:00.845336 | orchestrator | Thursday 19 February 2026 05:50:30 +0000 (0:00:01.182) 0:07:16.741 *****
2026-02-19 05:51:00.845344 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845351 | orchestrator |
2026-02-19 05:51:00.845359 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-19 05:51:00.845367 | orchestrator | Thursday 19 February 2026 05:50:31 +0000 (0:00:01.108) 0:07:17.850 *****
2026-02-19 05:51:00.845374 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 05:51:00.845382 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 05:51:00.845398 | orchestrator |
2026-02-19 05:51:00.845406 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-19 05:51:00.845414 | orchestrator | Thursday 19 February 2026 05:50:33 +0000 (0:00:01.911) 0:07:19.762 *****
2026-02-19 05:51:00.845422 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:51:00.845430 | orchestrator |
2026-02-19 05:51:00.845437 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-19 05:51:00.845445 | orchestrator | Thursday 19 February 2026 05:50:35 +0000 (0:00:01.654) 0:07:21.416 *****
2026-02-19 05:51:00.845453 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845461 | orchestrator |
2026-02-19 05:51:00.845469 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-19 05:51:00.845477 | orchestrator | Thursday 19 February 2026 05:50:36 +0000 (0:00:01.169) 0:07:22.586 *****
2026-02-19 05:51:00.845485 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845493 | orchestrator |
2026-02-19 05:51:00.845518 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-19 05:51:00.845528 | orchestrator | Thursday 19 February 2026 05:50:37 +0000 (0:00:01.136) 0:07:23.722 *****
2026-02-19 05:51:00.845542 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845555 | orchestrator |
2026-02-19 05:51:00.845575 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-19 05:51:00.845589 | orchestrator | Thursday 19 February 2026 05:50:38 +0000 (0:00:01.156) 0:07:24.878 *****
2026-02-19 05:51:00.845603 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-19 05:51:00.845617 | orchestrator |
2026-02-19 05:51:00.845631 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-19 05:51:00.845644 | orchestrator | Thursday 19 February 2026 05:50:40 +0000 (0:00:01.454) 0:07:26.333 *****
2026-02-19 05:51:00.845658 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:51:00.845672 | orchestrator |
2026-02-19 05:51:00.845685 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-19 05:51:00.845698 | orchestrator | Thursday 19 February 2026 05:50:41 +0000 (0:00:01.742) 0:07:28.076 *****
2026-02-19 05:51:00.845706 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-19 05:51:00.845714 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-19 05:51:00.845721 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-19 05:51:00.845729 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845737 | orchestrator |
2026-02-19 05:51:00.845745 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-19 05:51:00.845753 | orchestrator | Thursday 19 February 2026 05:50:42 +0000 (0:00:01.147) 0:07:29.223 *****
2026-02-19 05:51:00.845760 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845768 | orchestrator |
2026-02-19 05:51:00.845776 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-19 05:51:00.845784 | orchestrator | Thursday 19 February 2026 05:50:44 +0000 (0:00:01.105) 0:07:30.329 *****
2026-02-19 05:51:00.845791 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845799 | orchestrator |
2026-02-19 05:51:00.845807 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-19 05:51:00.845815 | orchestrator | Thursday 19 February 2026 05:50:45 +0000 (0:00:01.167) 0:07:31.496 *****
2026-02-19 05:51:00.845823 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845830 | orchestrator |
2026-02-19 05:51:00.845838 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-19 05:51:00.845846 | orchestrator | Thursday 19 February 2026 05:50:46 +0000 (0:00:01.118) 0:07:32.614 *****
2026-02-19 05:51:00.845854 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845861 | orchestrator |
2026-02-19 05:51:00.845869 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-19 05:51:00.845884 | orchestrator | Thursday 19 February 2026 05:50:47 +0000 (0:00:01.118) 0:07:33.733 *****
2026-02-19 05:51:00.845891 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.845899 | orchestrator |
2026-02-19 05:51:00.845929 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-19 05:51:00.845938 | orchestrator | Thursday 19 February 2026 05:50:48 +0000 (0:00:01.135) 0:07:34.869 *****
2026-02-19 05:51:00.845946 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:51:00.845954 | orchestrator |
2026-02-19 05:51:00.845962 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-19 05:51:00.845969 | orchestrator | Thursday 19 February 2026 05:50:51 +0000 (0:00:02.715) 0:07:37.584 *****
2026-02-19 05:51:00.845977 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:51:00.845985 | orchestrator |
2026-02-19 05:51:00.845993 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-19 05:51:00.846001 | orchestrator | Thursday 19 February 2026 05:50:52 +0000 (0:00:01.126) 0:07:38.711 *****
2026-02-19 05:51:00.846009 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-19 05:51:00.846068 | orchestrator |
2026-02-19 05:51:00.846077 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-19 05:51:00.846085 | orchestrator | Thursday 19 February 2026 05:50:53 +0000 (0:00:01.477) 0:07:40.188 *****
2026-02-19 05:51:00.846093 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.846101 | orchestrator |
2026-02-19 05:51:00.846109 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-19 05:51:00.846116 | orchestrator | Thursday 19 February 2026 05:50:55 +0000 (0:00:01.131) 0:07:41.320 *****
2026-02-19 05:51:00.846124 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.846141 | orchestrator |
2026-02-19 05:51:00.846149 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-19 05:51:00.846157 | orchestrator | Thursday 19 February 2026 05:50:56 +0000 (0:00:01.150) 0:07:42.470 *****
2026-02-19 05:51:00.846164 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.846172 | orchestrator |
2026-02-19 05:51:00.846180 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-19 05:51:00.846188 | orchestrator | Thursday 19 February 2026 05:50:57 +0000 (0:00:01.139) 0:07:43.610 *****
2026-02-19 05:51:00.846196 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.846204 | orchestrator |
2026-02-19 05:51:00.846212 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-19 05:51:00.846220 | orchestrator | Thursday 19 February 2026 05:50:58 +0000 (0:00:01.139) 0:07:44.750 *****
2026-02-19 05:51:00.846228 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.846236 | orchestrator |
2026-02-19 05:51:00.846244 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-19 05:51:00.846252 | orchestrator | Thursday 19 February 2026 05:50:59 +0000 (0:00:01.145) 0:07:45.896 *****
2026-02-19 05:51:00.846260 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:00.846267 | orchestrator |
2026-02-19 05:51:00.846275 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-19 05:51:00.846293 | orchestrator | Thursday 19 February 2026 05:51:00 +0000 (0:00:01.161) 0:07:47.057 *****
2026-02-19 05:51:45.047629 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.047774 | orchestrator |
2026-02-19 05:51:45.047804 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-19 05:51:45.047836 | orchestrator | Thursday 19 February 2026 05:51:01 +0000 (0:00:01.141) 0:07:48.199 *****
2026-02-19 05:51:45.047850 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.047861 | orchestrator |
2026-02-19 05:51:45.047873 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-19 05:51:45.047884 | orchestrator | Thursday 19 February 2026 05:51:03 +0000 (0:00:01.133) 0:07:49.332 *****
2026-02-19 05:51:45.047901 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:51:45.047983 | orchestrator |
2026-02-19 05:51:45.048005 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-19 05:51:45.048056 | orchestrator | Thursday 19 February 2026 05:51:04 +0000 (0:00:01.133) 0:07:50.466 *****
2026-02-19 05:51:45.048071 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-19 05:51:45.048084 | orchestrator |
2026-02-19 05:51:45.048095 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-19 05:51:45.048106 | orchestrator | Thursday 19 February 2026 05:51:05 +0000 (0:00:01.447) 0:07:51.913 *****
2026-02-19 05:51:45.048117 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-19 05:51:45.048129 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-19 05:51:45.048142 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-19 05:51:45.048155 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-19 05:51:45.048168 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-19 05:51:45.048180 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-19 05:51:45.048192 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-19 05:51:45.048205 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-19 05:51:45.048217 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-19 05:51:45.048230 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-19 05:51:45.048242 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-19 05:51:45.048255 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-19 05:51:45.048267 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-19 05:51:45.048280 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 05:51:45.048299 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-19 05:51:45.048319 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-19 05:51:45.048337 | orchestrator |
2026-02-19 05:51:45.048356 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-19 05:51:45.048375 | orchestrator | Thursday 19 February 2026 05:51:12 +0000 (0:00:07.111) 0:07:59.025 *****
2026-02-19 05:51:45.048395 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.048415 | orchestrator |
2026-02-19 05:51:45.048436 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-19 05:51:45.048456 | orchestrator | Thursday 19 February 2026 05:51:13 +0000 (0:00:01.109) 0:08:00.135 *****
2026-02-19 05:51:45.048472 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.048485 | orchestrator |
2026-02-19 05:51:45.048498 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 05:51:45.048510 | orchestrator | Thursday 19 February 2026 05:51:15 +0000 (0:00:01.104) 0:08:01.240 *****
2026-02-19 05:51:45.048523 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.048536 | orchestrator |
2026-02-19 05:51:45.048546 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 05:51:45.048557 | orchestrator | Thursday 19 February 2026 05:51:16 +0000 (0:00:01.176) 0:08:02.416 *****
2026-02-19 05:51:45.048568 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.048578 | orchestrator |
2026-02-19 05:51:45.048589 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 05:51:45.048600 | orchestrator | Thursday 19 February 2026 05:51:17 +0000 (0:00:01.098) 0:08:03.514 *****
2026-02-19 05:51:45.048611 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.048621 | orchestrator |
2026-02-19 05:51:45.048632 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 05:51:45.048643 | orchestrator | Thursday 19 February 2026 05:51:18 +0000 (0:00:01.106) 0:08:04.621 *****
2026-02-19 05:51:45.048654 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.048666 | orchestrator |
2026-02-19 05:51:45.048685 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 05:51:45.048716 | orchestrator | Thursday 19 February 2026 05:51:19 +0000 (0:00:01.092) 0:08:05.714 *****
2026-02-19 05:51:45.048735 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.048753 | orchestrator |
2026-02-19 05:51:45.048773 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 05:51:45.048792 | orchestrator | Thursday 19 February 2026 05:51:20 +0000 (0:00:01.137) 0:08:06.851 *****
2026-02-19 05:51:45.048810 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.048822 | orchestrator |
2026-02-19 05:51:45.048832 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 05:51:45.048843 | orchestrator | Thursday 19 February 2026 05:51:21 +0000 (0:00:01.100) 0:08:07.952 *****
2026-02-19 05:51:45.048854 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.048865 | orchestrator |
2026-02-19 05:51:45.048876 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 05:51:45.048887 | orchestrator | Thursday 19 February 2026 05:51:22 +0000 (0:00:01.129) 0:08:09.082 *****
2026-02-19 05:51:45.048897 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.048908 | orchestrator |
2026-02-19 05:51:45.048947 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 05:51:45.048981 | orchestrator | Thursday 19 February 2026 05:51:23 +0000 (0:00:01.109) 0:08:10.192 *****
2026-02-19 05:51:45.048992 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049003 | orchestrator |
2026-02-19 05:51:45.049014 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 05:51:45.049035 | orchestrator | Thursday 19 February 2026 05:51:25 +0000 (0:00:01.092) 0:08:11.285 *****
2026-02-19 05:51:45.049054 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049072 | orchestrator |
2026-02-19 05:51:45.049090 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 05:51:45.049108 | orchestrator | Thursday 19 February 2026 05:51:26 +0000 (0:00:01.132) 0:08:12.418 *****
2026-02-19 05:51:45.049127 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049147 | orchestrator |
2026-02-19 05:51:45.049165 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 05:51:45.049184 | orchestrator | Thursday 19 February 2026 05:51:27 +0000 (0:00:01.211) 0:08:13.629 *****
2026-02-19 05:51:45.049195 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049206 | orchestrator |
2026-02-19 05:51:45.049217 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-19 05:51:45.049228 | orchestrator | Thursday 19 February 2026 05:51:28 +0000 (0:00:01.110) 0:08:14.740 *****
2026-02-19 05:51:45.049239 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049249 | orchestrator |
2026-02-19 05:51:45.049260 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-19 05:51:45.049271 | orchestrator | Thursday 19 February 2026 05:51:29 +0000 (0:00:01.215) 0:08:15.955 *****
2026-02-19 05:51:45.049281 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049292 | orchestrator |
2026-02-19 05:51:45.049303 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-19 05:51:45.049313 | orchestrator | Thursday 19 February 2026 05:51:30 +0000 (0:00:01.141) 0:08:17.097 *****
2026-02-19 05:51:45.049324 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049335 | orchestrator |
2026-02-19 05:51:45.049346 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 05:51:45.049359 | orchestrator | Thursday 19 February 2026 05:51:31 +0000 (0:00:01.102) 0:08:18.199 *****
2026-02-19 05:51:45.049375 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049392 | orchestrator |
2026-02-19 05:51:45.049422 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 05:51:45.049439 | orchestrator | Thursday 19 February 2026 05:51:33 +0000 (0:00:01.135) 0:08:19.335 *****
2026-02-19 05:51:45.049456 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049487 | orchestrator |
2026-02-19 05:51:45.049505 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 05:51:45.049522 | orchestrator | Thursday 19 February 2026 05:51:34 +0000 (0:00:01.175) 0:08:20.511 *****
2026-02-19 05:51:45.049540 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049558 | orchestrator |
2026-02-19 05:51:45.049576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 05:51:45.049594 | orchestrator | Thursday 19 February 2026 05:51:35 +0000 (0:00:01.103) 0:08:21.614 *****
2026-02-19 05:51:45.049613 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049631 | orchestrator |
2026-02-19 05:51:45.049649 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 05:51:45.049666 | orchestrator | Thursday 19 February 2026 05:51:36 +0000 (0:00:01.103) 0:08:22.718 *****
2026-02-19 05:51:45.049677 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-19 05:51:45.049688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-19 05:51:45.049699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-19 05:51:45.049710 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049721 | orchestrator |
2026-02-19 05:51:45.049732 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 05:51:45.049742 | orchestrator | Thursday 19 February 2026 05:51:38 +0000 (0:00:01.518) 0:08:24.237 *****
2026-02-19 05:51:45.049753 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-19 05:51:45.049764 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-19 05:51:45.049775 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-19 05:51:45.049785 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049796 | orchestrator |
2026-02-19 05:51:45.049807 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 05:51:45.049818 | orchestrator | Thursday 19 February 2026 05:51:39 +0000 (0:00:01.321) 0:08:25.559 *****
2026-02-19 05:51:45.049828 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-19 05:51:45.049839 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-19 05:51:45.049849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-19 05:51:45.049860 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049871 | orchestrator |
2026-02-19 05:51:45.049882 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 05:51:45.049892 | orchestrator | Thursday 19 February 2026 05:51:40 +0000 (0:00:01.427) 0:08:26.987 *****
2026-02-19 05:51:45.049903 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.049914 | orchestrator |
2026-02-19 05:51:45.049985 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 05:51:45.049997 | orchestrator | Thursday 19 February 2026 05:51:41 +0000 (0:00:01.137) 0:08:28.125 *****
2026-02-19 05:51:45.050007 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-19 05:51:45.050085 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:51:45.050100 | orchestrator |
2026-02-19 05:51:45.050111 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-19 05:51:45.050122 | orchestrator | Thursday 19 February 2026 05:51:43 +0000 (0:00:01.413) 0:08:29.538 *****
2026-02-19 05:51:45.050133 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:51:45.050144 | orchestrator |
2026-02-19 05:51:45.050155 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-19 05:51:45.050181 | orchestrator | Thursday 19 February 2026 05:51:45 +0000 (0:00:01.723) 0:08:31.262 *****
2026-02-19 05:52:52.072223 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:52:52.072342 | orchestrator |
2026-02-19 05:52:52.072362 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-19 05:52:52.072391 | orchestrator | Thursday 19 February 2026 05:51:46 +0000 (0:00:01.441) 0:08:32.531 *****
2026-02-19 05:52:52.072403 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-02-19 05:52:52.072437 | orchestrator |
2026-02-19 05:52:52.072449 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-19 05:52:52.072461 | orchestrator | Thursday 19 February 2026 05:51:47 +0000 (0:00:03.195) 0:08:33.973 *****
2026-02-19 05:52:52.072473 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-19 05:52:52.072484 | orchestrator |
2026-02-19 05:52:52.072494 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-19 05:52:52.072504 | orchestrator | Thursday 19 February 2026 05:51:50 +0000 (0:00:01.143) 0:08:37.168 *****
2026-02-19 05:52:52.072516 | orchestrator | skipping: [testbed-node-0]
2026-02-19 05:52:52.072549 | orchestrator |
2026-02-19 05:52:52.072562 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-19 05:52:52.072572 | orchestrator | Thursday 19 February 2026 05:51:52 +0000 (0:00:01.143) 0:08:38.312 *****
2026-02-19 05:52:52.072583 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:52:52.072593 | orchestrator |
2026-02-19 05:52:52.072604 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-19 05:52:52.072615 | orchestrator | Thursday 19 February 2026 05:51:53 +0000 (0:00:01.112) 0:08:39.425 *****
2026-02-19 05:52:52.072625 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:52:52.072636 | orchestrator |
2026-02-19 05:52:52.072646 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-19 05:52:52.072655 | orchestrator | Thursday 19 February 2026 05:51:54 +0000 (0:00:01.111) 0:08:40.536 *****
2026-02-19 05:52:52.072666 | orchestrator | changed: [testbed-node-0]
2026-02-19 05:52:52.072677 | orchestrator |
2026-02-19 05:52:52.072687 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-19 05:52:52.072697 | orchestrator | Thursday 19 February 2026 05:51:56 +0000 (0:00:02.086) 0:08:42.622 *****
2026-02-19 05:52:52.072708 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:52:52.072718 | orchestrator |
2026-02-19 05:52:52.072729 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-19 05:52:52.072739 | orchestrator | Thursday 19 February 2026 05:51:57 +0000 (0:00:01.578) 0:08:44.201 *****
2026-02-19 05:52:52.072749 | orchestrator | ok: [testbed-node-0]
2026-02-19 05:52:52.072760 | orchestrator |
2026-02-19 05:52:52.072769 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-19 05:52:52.072780 | orchestrator | Thursday 19 February 2026 05:51:59 +0000 (0:00:01.498)
0:08:45.699 ***** 2026-02-19 05:52:52.072791 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:52:52.072802 | orchestrator | 2026-02-19 05:52:52.072814 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-19 05:52:52.072825 | orchestrator | Thursday 19 February 2026 05:52:00 +0000 (0:00:01.494) 0:08:47.194 ***** 2026-02-19 05:52:52.072837 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:52:52.072849 | orchestrator | 2026-02-19 05:52:52.072860 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-19 05:52:52.072870 | orchestrator | Thursday 19 February 2026 05:52:02 +0000 (0:00:01.743) 0:08:48.937 ***** 2026-02-19 05:52:52.072882 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:52:52.072893 | orchestrator | 2026-02-19 05:52:52.072904 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-19 05:52:52.072915 | orchestrator | Thursday 19 February 2026 05:52:04 +0000 (0:00:01.651) 0:08:50.588 ***** 2026-02-19 05:52:52.072926 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-19 05:52:52.072960 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-19 05:52:52.072972 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-19 05:52:52.072983 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-02-19 05:52:52.072994 | orchestrator | 2026-02-19 05:52:52.073005 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-19 05:52:52.073016 | orchestrator | Thursday 19 February 2026 05:52:08 +0000 (0:00:03.962) 0:08:54.551 ***** 2026-02-19 05:52:52.073041 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:52:52.073051 | orchestrator | 2026-02-19 05:52:52.073062 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-19 
05:52:52.073073 | orchestrator | Thursday 19 February 2026 05:52:10 +0000 (0:00:02.121) 0:08:56.672 ***** 2026-02-19 05:52:52.073084 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:52:52.073095 | orchestrator | 2026-02-19 05:52:52.073105 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-19 05:52:52.073116 | orchestrator | Thursday 19 February 2026 05:52:11 +0000 (0:00:01.122) 0:08:57.794 ***** 2026-02-19 05:52:52.073127 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:52:52.073137 | orchestrator | 2026-02-19 05:52:52.073148 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-19 05:52:52.073158 | orchestrator | Thursday 19 February 2026 05:52:12 +0000 (0:00:01.119) 0:08:58.914 ***** 2026-02-19 05:52:52.073169 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:52:52.073180 | orchestrator | 2026-02-19 05:52:52.073191 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-19 05:52:52.073202 | orchestrator | Thursday 19 February 2026 05:52:14 +0000 (0:00:02.097) 0:09:01.012 ***** 2026-02-19 05:52:52.073213 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:52:52.073225 | orchestrator | 2026-02-19 05:52:52.073236 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-19 05:52:52.073247 | orchestrator | Thursday 19 February 2026 05:52:16 +0000 (0:00:01.541) 0:09:02.553 ***** 2026-02-19 05:52:52.073257 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:52:52.073268 | orchestrator | 2026-02-19 05:52:52.073279 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-19 05:52:52.073290 | orchestrator | Thursday 19 February 2026 05:52:17 +0000 (0:00:01.113) 0:09:03.667 ***** 2026-02-19 05:52:52.073327 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-19 
05:52:52.073342 | orchestrator | 2026-02-19 05:52:52.073353 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-19 05:52:52.073375 | orchestrator | Thursday 19 February 2026 05:52:18 +0000 (0:00:01.464) 0:09:05.132 ***** 2026-02-19 05:52:52.073385 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:52:52.073397 | orchestrator | 2026-02-19 05:52:52.073407 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-19 05:52:52.073419 | orchestrator | Thursday 19 February 2026 05:52:20 +0000 (0:00:01.107) 0:09:06.239 ***** 2026-02-19 05:52:52.073431 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:52:52.073441 | orchestrator | 2026-02-19 05:52:52.073451 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-19 05:52:52.073458 | orchestrator | Thursday 19 February 2026 05:52:21 +0000 (0:00:01.124) 0:09:07.364 ***** 2026-02-19 05:52:52.073465 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-19 05:52:52.073471 | orchestrator | 2026-02-19 05:52:52.073478 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-19 05:52:52.073485 | orchestrator | Thursday 19 February 2026 05:52:22 +0000 (0:00:01.449) 0:09:08.814 ***** 2026-02-19 05:52:52.073491 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:52:52.073498 | orchestrator | 2026-02-19 05:52:52.073505 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-19 05:52:52.073511 | orchestrator | Thursday 19 February 2026 05:52:24 +0000 (0:00:02.331) 0:09:11.145 ***** 2026-02-19 05:52:52.073518 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:52:52.073524 | orchestrator | 2026-02-19 05:52:52.073531 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-19 
05:52:52.073538 | orchestrator | Thursday 19 February 2026 05:52:26 +0000 (0:00:01.997) 0:09:13.142 ***** 2026-02-19 05:52:52.073545 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:52:52.073551 | orchestrator | 2026-02-19 05:52:52.073558 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-19 05:52:52.073565 | orchestrator | Thursday 19 February 2026 05:52:29 +0000 (0:00:02.445) 0:09:15.588 ***** 2026-02-19 05:52:52.073583 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:52:52.073589 | orchestrator | 2026-02-19 05:52:52.073596 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-19 05:52:52.073603 | orchestrator | Thursday 19 February 2026 05:52:32 +0000 (0:00:03.472) 0:09:19.061 ***** 2026-02-19 05:52:52.073609 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-19 05:52:52.073616 | orchestrator | 2026-02-19 05:52:52.073623 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-02-19 05:52:52.073629 | orchestrator | Thursday 19 February 2026 05:52:34 +0000 (0:00:01.531) 0:09:20.592 ***** 2026-02-19 05:52:52.073636 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:52:52.073642 | orchestrator | 2026-02-19 05:52:52.073649 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-19 05:52:52.073656 | orchestrator | Thursday 19 February 2026 05:52:36 +0000 (0:00:02.536) 0:09:23.129 ***** 2026-02-19 05:52:52.073662 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:52:52.073669 | orchestrator | 2026-02-19 05:52:52.073676 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-19 05:52:52.073682 | orchestrator | Thursday 19 February 2026 05:52:40 +0000 (0:00:03.221) 0:09:26.351 ***** 2026-02-19 05:52:52.073689 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:52:52.073695 | orchestrator | 2026-02-19 05:52:52.073702 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-19 05:52:52.073709 | orchestrator | Thursday 19 February 2026 05:52:41 +0000 (0:00:01.133) 0:09:27.485 ***** 2026-02-19 05:52:52.073718 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-19 05:52:52.073728 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-02-19 05:52:52.073735 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-19 05:52:52.073742 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-19 05:52:52.073765 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-19 05:53:33.696537 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}])  2026-02-19 05:53:33.696708 | orchestrator | 2026-02-19 05:53:33.696728 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-19 05:53:33.696740 | orchestrator | Thursday 19 February 2026 05:52:52 +0000 (0:00:10.793) 0:09:38.279 ***** 
2026-02-19 05:53:33.696751 | orchestrator | changed: [testbed-node-0] 2026-02-19 05:53:33.696762 | orchestrator | 2026-02-19 05:53:33.696772 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 05:53:33.696782 | orchestrator | Thursday 19 February 2026 05:52:54 +0000 (0:00:02.641) 0:09:40.921 ***** 2026-02-19 05:53:33.696792 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 05:53:33.696802 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-19 05:53:33.696811 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-19 05:53:33.696821 | orchestrator | 2026-02-19 05:53:33.696831 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 05:53:33.696840 | orchestrator | Thursday 19 February 2026 05:52:56 +0000 (0:00:02.136) 0:09:43.057 ***** 2026-02-19 05:53:33.696850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-19 05:53:33.696859 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-19 05:53:33.696869 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-19 05:53:33.696878 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:53:33.696888 | orchestrator | 2026-02-19 05:53:33.696898 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-19 05:53:33.696907 | orchestrator | Thursday 19 February 2026 05:52:58 +0000 (0:00:01.394) 0:09:44.451 ***** 2026-02-19 05:53:33.696917 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:53:33.696927 | orchestrator | 2026-02-19 05:53:33.696937 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-19 05:53:33.697000 | orchestrator | Thursday 19 February 2026 05:52:59 +0000 (0:00:01.131) 0:09:45.583 ***** 2026-02-19 05:53:33.697010 | orchestrator | ok: [testbed-node-0] 2026-02-19 05:53:33.697020 | orchestrator | 2026-02-19 05:53:33.697030 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-19 05:53:33.697039 | orchestrator | Thursday 19 February 2026 05:53:01 +0000 (0:00:02.365) 0:09:47.948 ***** 2026-02-19 05:53:33.697049 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:53:33.697058 | orchestrator | 2026-02-19 05:53:33.697068 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-19 05:53:33.697080 | orchestrator | Thursday 19 February 2026 05:53:02 +0000 (0:00:01.097) 0:09:49.046 ***** 2026-02-19 05:53:33.697090 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:53:33.697101 | orchestrator | 2026-02-19 05:53:33.697111 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-19 05:53:33.697122 | orchestrator | Thursday 19 February 2026 05:53:03 +0000 (0:00:01.101) 0:09:50.148 ***** 2026-02-19 05:53:33.697133 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:53:33.697144 | orchestrator | 2026-02-19 05:53:33.697153 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-19 05:53:33.697163 | orchestrator | Thursday 19 February 2026 05:53:05 +0000 (0:00:01.110) 0:09:51.258 ***** 2026-02-19 05:53:33.697172 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:53:33.697182 | orchestrator | 2026-02-19 05:53:33.697191 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-19 05:53:33.697201 | orchestrator | Thursday 19 February 2026 05:53:06 +0000 (0:00:01.149) 0:09:52.408 ***** 2026-02-19 05:53:33.697210 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:53:33.697219 | 
orchestrator | 2026-02-19 05:53:33.697229 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-19 05:53:33.697239 | orchestrator | Thursday 19 February 2026 05:53:07 +0000 (0:00:01.090) 0:09:53.498 ***** 2026-02-19 05:53:33.697248 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:53:33.697257 | orchestrator | 2026-02-19 05:53:33.697267 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-19 05:53:33.697285 | orchestrator | Thursday 19 February 2026 05:53:08 +0000 (0:00:01.106) 0:09:54.605 ***** 2026-02-19 05:53:33.697294 | orchestrator | skipping: [testbed-node-0] 2026-02-19 05:53:33.697304 | orchestrator | 2026-02-19 05:53:33.697314 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-19 05:53:33.697324 | orchestrator | 2026-02-19 05:53:33.697334 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-19 05:53:33.697343 | orchestrator | Thursday 19 February 2026 05:53:09 +0000 (0:00:00.958) 0:09:55.564 ***** 2026-02-19 05:53:33.697353 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:33.697362 | orchestrator | 2026-02-19 05:53:33.697372 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-19 05:53:33.697381 | orchestrator | Thursday 19 February 2026 05:53:10 +0000 (0:00:01.163) 0:09:56.727 ***** 2026-02-19 05:53:33.697391 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:33.697400 | orchestrator | 2026-02-19 05:53:33.697413 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-19 05:53:33.697429 | orchestrator | Thursday 19 February 2026 05:53:11 +0000 (0:00:00.748) 0:09:57.475 ***** 2026-02-19 05:53:33.697445 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:33.697462 | orchestrator | 2026-02-19 05:53:33.697479 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-02-19 05:53:33.697512 | orchestrator | Thursday 19 February 2026 05:53:12 +0000 (0:00:00.758) 0:09:58.234 ***** 2026-02-19 05:53:33.697524 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:33.697533 | orchestrator | 2026-02-19 05:53:33.697560 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 05:53:33.697570 | orchestrator | Thursday 19 February 2026 05:53:12 +0000 (0:00:00.763) 0:09:58.997 ***** 2026-02-19 05:53:33.697580 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-19 05:53:33.697590 | orchestrator | 2026-02-19 05:53:33.697599 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-19 05:53:33.697609 | orchestrator | Thursday 19 February 2026 05:53:13 +0000 (0:00:01.216) 0:10:00.214 ***** 2026-02-19 05:53:33.697618 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:33.697628 | orchestrator | 2026-02-19 05:53:33.697638 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-19 05:53:33.697647 | orchestrator | Thursday 19 February 2026 05:53:15 +0000 (0:00:01.486) 0:10:01.700 ***** 2026-02-19 05:53:33.697657 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:33.697666 | orchestrator | 2026-02-19 05:53:33.697676 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 05:53:33.697685 | orchestrator | Thursday 19 February 2026 05:53:16 +0000 (0:00:01.122) 0:10:02.822 ***** 2026-02-19 05:53:33.697695 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:33.697704 | orchestrator | 2026-02-19 05:53:33.697714 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 05:53:33.697724 | orchestrator | Thursday 19 February 2026 05:53:18 +0000 (0:00:01.467) 0:10:04.290 
***** 2026-02-19 05:53:33.697733 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:33.697743 | orchestrator | 2026-02-19 05:53:33.697752 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-19 05:53:33.697762 | orchestrator | Thursday 19 February 2026 05:53:19 +0000 (0:00:01.123) 0:10:05.413 ***** 2026-02-19 05:53:33.697772 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:33.697781 | orchestrator | 2026-02-19 05:53:33.697791 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-19 05:53:33.697800 | orchestrator | Thursday 19 February 2026 05:53:20 +0000 (0:00:01.120) 0:10:06.534 ***** 2026-02-19 05:53:33.697810 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:33.697820 | orchestrator | 2026-02-19 05:53:33.697829 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-19 05:53:33.697839 | orchestrator | Thursday 19 February 2026 05:53:21 +0000 (0:00:01.112) 0:10:07.647 ***** 2026-02-19 05:53:33.697857 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:33.697866 | orchestrator | 2026-02-19 05:53:33.697876 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-19 05:53:33.697886 | orchestrator | Thursday 19 February 2026 05:53:22 +0000 (0:00:01.119) 0:10:08.766 ***** 2026-02-19 05:53:33.697895 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:33.697905 | orchestrator | 2026-02-19 05:53:33.697914 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-19 05:53:33.697924 | orchestrator | Thursday 19 February 2026 05:53:23 +0000 (0:00:01.117) 0:10:09.884 ***** 2026-02-19 05:53:33.697933 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 05:53:33.698014 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 
05:53:33.698103 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 05:53:33.698120 | orchestrator | 2026-02-19 05:53:33.698136 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-19 05:53:33.698154 | orchestrator | Thursday 19 February 2026 05:53:25 +0000 (0:00:01.945) 0:10:11.829 ***** 2026-02-19 05:53:33.698171 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:33.698187 | orchestrator | 2026-02-19 05:53:33.698197 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-19 05:53:33.698206 | orchestrator | Thursday 19 February 2026 05:53:26 +0000 (0:00:01.251) 0:10:13.081 ***** 2026-02-19 05:53:33.698215 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 05:53:33.698225 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 05:53:33.698235 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 05:53:33.698244 | orchestrator | 2026-02-19 05:53:33.698254 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-19 05:53:33.698263 | orchestrator | Thursday 19 February 2026 05:53:29 +0000 (0:00:03.134) 0:10:16.215 ***** 2026-02-19 05:53:33.698272 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-19 05:53:33.698282 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-19 05:53:33.698291 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-19 05:53:33.698301 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:33.698310 | orchestrator | 2026-02-19 05:53:33.698320 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-19 05:53:33.698329 | orchestrator | Thursday 19 February 2026 05:53:31 +0000 
(0:00:01.814) 0:10:18.030 ***** 2026-02-19 05:53:33.698341 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-19 05:53:33.698353 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-19 05:53:33.698370 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-19 05:53:33.698380 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:33.698390 | orchestrator | 2026-02-19 05:53:33.698409 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-19 05:53:55.418221 | orchestrator | Thursday 19 February 2026 05:53:33 +0000 (0:00:01.874) 0:10:19.904 ***** 2026-02-19 05:53:55.418326 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 05:53:55.418360 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 05:53:55.418367 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 05:53:55.418374 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:55.418382 | orchestrator | 2026-02-19 05:53:55.418390 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-19 05:53:55.418396 | orchestrator | Thursday 19 February 2026 05:53:34 +0000 (0:00:01.183) 0:10:21.088 ***** 2026-02-19 05:53:55.418404 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 05:53:27.451049', 'end': '2026-02-19 05:53:27.506275', 'delta': '0:00:00.055226', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-19 05:53:55.418413 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'a8e499fc5d9a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 
05:53:28.281969', 'end': '2026-02-19 05:53:28.328976', 'delta': '0:00:00.047007', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8e499fc5d9a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-19 05:53:55.418420 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '7f7671ec0784', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 05:53:28.828456', 'end': '2026-02-19 05:53:28.880037', 'delta': '0:00:00.051581', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7f7671ec0784'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-19 05:53:55.418437 | orchestrator | 2026-02-19 05:53:55.418517 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-19 05:53:55.418538 | orchestrator | Thursday 19 February 2026 05:53:36 +0000 (0:00:01.209) 0:10:22.298 ***** 2026-02-19 05:53:55.418545 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:55.418561 | orchestrator | 2026-02-19 05:53:55.418582 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-19 05:53:55.418589 | orchestrator | Thursday 19 February 2026 05:53:37 +0000 (0:00:01.225) 0:10:23.524 ***** 2026-02-19 
05:53:55.418595 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:55.418602 | orchestrator | 2026-02-19 05:53:55.418608 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-19 05:53:55.418614 | orchestrator | Thursday 19 February 2026 05:53:38 +0000 (0:00:01.233) 0:10:24.758 ***** 2026-02-19 05:53:55.418620 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:55.418626 | orchestrator | 2026-02-19 05:53:55.418633 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-19 05:53:55.418639 | orchestrator | Thursday 19 February 2026 05:53:39 +0000 (0:00:01.155) 0:10:25.913 ***** 2026-02-19 05:53:55.418645 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-19 05:53:55.418652 | orchestrator | 2026-02-19 05:53:55.418658 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 05:53:55.418664 | orchestrator | Thursday 19 February 2026 05:53:42 +0000 (0:00:03.092) 0:10:29.006 ***** 2026-02-19 05:53:55.418670 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:53:55.418676 | orchestrator | 2026-02-19 05:53:55.418682 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-19 05:53:55.418688 | orchestrator | Thursday 19 February 2026 05:53:43 +0000 (0:00:01.165) 0:10:30.171 ***** 2026-02-19 05:53:55.418694 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:55.418701 | orchestrator | 2026-02-19 05:53:55.418707 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-19 05:53:55.418713 | orchestrator | Thursday 19 February 2026 05:53:45 +0000 (0:00:01.117) 0:10:31.289 ***** 2026-02-19 05:53:55.418719 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:55.418725 | orchestrator | 2026-02-19 05:53:55.418731 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-02-19 05:53:55.418737 | orchestrator | Thursday 19 February 2026 05:53:46 +0000 (0:00:01.260) 0:10:32.550 ***** 2026-02-19 05:53:55.418744 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:55.418750 | orchestrator | 2026-02-19 05:53:55.418756 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-19 05:53:55.418762 | orchestrator | Thursday 19 February 2026 05:53:47 +0000 (0:00:01.106) 0:10:33.657 ***** 2026-02-19 05:53:55.418768 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:55.418774 | orchestrator | 2026-02-19 05:53:55.418780 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-19 05:53:55.418786 | orchestrator | Thursday 19 February 2026 05:53:48 +0000 (0:00:01.108) 0:10:34.765 ***** 2026-02-19 05:53:55.418793 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:55.418799 | orchestrator | 2026-02-19 05:53:55.418805 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-19 05:53:55.418811 | orchestrator | Thursday 19 February 2026 05:53:49 +0000 (0:00:01.121) 0:10:35.887 ***** 2026-02-19 05:53:55.418817 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:55.418824 | orchestrator | 2026-02-19 05:53:55.418830 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 05:53:55.418836 | orchestrator | Thursday 19 February 2026 05:53:50 +0000 (0:00:01.151) 0:10:37.039 ***** 2026-02-19 05:53:55.418842 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:55.418848 | orchestrator | 2026-02-19 05:53:55.418854 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 05:53:55.418860 | orchestrator | Thursday 19 February 2026 05:53:51 +0000 (0:00:01.103) 0:10:38.142 ***** 2026-02-19 05:53:55.418867 | orchestrator | 
skipping: [testbed-node-1] 2026-02-19 05:53:55.418873 | orchestrator | 2026-02-19 05:53:55.418879 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 05:53:55.418886 | orchestrator | Thursday 19 February 2026 05:53:53 +0000 (0:00:01.109) 0:10:39.252 ***** 2026-02-19 05:53:55.418897 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:55.418903 | orchestrator | 2026-02-19 05:53:55.418909 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 05:53:55.418918 | orchestrator | Thursday 19 February 2026 05:53:54 +0000 (0:00:01.119) 0:10:40.371 ***** 2026-02-19 05:53:55.418929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:53:55.418967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:53:55.418987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:53:55.419006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 05:53:56.709186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:53:56.709269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:53:56.709277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-19 05:53:56.709287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5b78108', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 05:53:56.709322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:53:56.709342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:53:56.709348 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:53:56.709355 | orchestrator | 2026-02-19 05:53:56.709362 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 05:53:56.709368 | orchestrator | Thursday 19 February 2026 05:53:55 +0000 (0:00:01.258) 0:10:41.630 ***** 2026-02-19 05:53:56.709376 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:53:56.709383 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:53:56.709393 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:53:56.709399 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:53:56.709406 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:53:56.709416 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:54:13.464628 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:54:13.464749 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5b78108', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:54:13.464779 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:54:13.464796 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:54:13.464802 | orchestrator | skipping: [testbed-node-1] 2026-02-19 
05:54:13.464808 | orchestrator | 2026-02-19 05:54:13.464814 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 05:54:13.464820 | orchestrator | Thursday 19 February 2026 05:53:56 +0000 (0:00:01.294) 0:10:42.925 ***** 2026-02-19 05:54:13.464824 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:54:13.464830 | orchestrator | 2026-02-19 05:54:13.464834 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 05:54:13.464838 | orchestrator | Thursday 19 February 2026 05:53:58 +0000 (0:00:01.496) 0:10:44.421 ***** 2026-02-19 05:54:13.464843 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:54:13.464847 | orchestrator | 2026-02-19 05:54:13.464851 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 05:54:13.464856 | orchestrator | Thursday 19 February 2026 05:53:59 +0000 (0:00:01.124) 0:10:45.545 ***** 2026-02-19 05:54:13.464860 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:54:13.464864 | orchestrator | 2026-02-19 05:54:13.464873 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 05:54:13.464877 | orchestrator | Thursday 19 February 2026 05:54:00 +0000 (0:00:01.536) 0:10:47.082 ***** 2026-02-19 05:54:13.464882 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:54:13.464886 | orchestrator | 2026-02-19 05:54:13.464891 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 05:54:13.464895 | orchestrator | Thursday 19 February 2026 05:54:01 +0000 (0:00:01.116) 0:10:48.198 ***** 2026-02-19 05:54:13.464899 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:54:13.464904 | orchestrator | 2026-02-19 05:54:13.464908 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 05:54:13.464912 | orchestrator | Thursday 19 February 2026 
05:54:03 +0000 (0:00:01.223) 0:10:49.422 ***** 2026-02-19 05:54:13.464916 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:54:13.464921 | orchestrator | 2026-02-19 05:54:13.464925 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 05:54:13.464929 | orchestrator | Thursday 19 February 2026 05:54:04 +0000 (0:00:01.117) 0:10:50.540 ***** 2026-02-19 05:54:13.464934 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-19 05:54:13.464939 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 05:54:13.464943 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-19 05:54:13.464947 | orchestrator | 2026-02-19 05:54:13.464987 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 05:54:13.464992 | orchestrator | Thursday 19 February 2026 05:54:06 +0000 (0:00:01.946) 0:10:52.486 ***** 2026-02-19 05:54:13.464996 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-19 05:54:13.465001 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-19 05:54:13.465005 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-19 05:54:13.465010 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:54:13.465014 | orchestrator | 2026-02-19 05:54:13.465018 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 05:54:13.465023 | orchestrator | Thursday 19 February 2026 05:54:07 +0000 (0:00:01.176) 0:10:53.663 ***** 2026-02-19 05:54:13.465028 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:54:13.465032 | orchestrator | 2026-02-19 05:54:13.465037 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-19 05:54:13.465041 | orchestrator | Thursday 19 February 2026 05:54:08 +0000 (0:00:01.204) 0:10:54.868 ***** 2026-02-19 05:54:13.465045 | 
orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 05:54:13.465050 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 05:54:13.465055 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 05:54:13.465059 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 05:54:13.465063 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 05:54:13.465068 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 05:54:13.465072 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 05:54:13.465077 | orchestrator | 2026-02-19 05:54:13.465081 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-19 05:54:13.465085 | orchestrator | Thursday 19 February 2026 05:54:10 +0000 (0:00:01.770) 0:10:56.638 ***** 2026-02-19 05:54:13.465090 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 05:54:13.465094 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 05:54:13.465102 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 05:54:13.465106 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 05:54:13.465117 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 05:54:13.465121 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 05:54:13.465126 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 05:54:13.465130 | orchestrator | 2026-02-19 05:54:13.465134 | 
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-19 05:54:13.465139 | orchestrator | Thursday 19 February 2026 05:54:12 +0000 (0:00:02.114) 0:10:58.752 ***** 2026-02-19 05:54:13.465143 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:54:13.465147 | orchestrator | 2026-02-19 05:54:13.465152 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-19 05:54:13.465159 | orchestrator | Thursday 19 February 2026 05:54:13 +0000 (0:00:00.918) 0:10:59.671 ***** 2026-02-19 05:54:52.812364 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:54:52.812570 | orchestrator | 2026-02-19 05:54:52.812593 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-19 05:54:52.812607 | orchestrator | Thursday 19 February 2026 05:54:14 +0000 (0:00:00.859) 0:11:00.530 ***** 2026-02-19 05:54:52.812618 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:54:52.812628 | orchestrator | 2026-02-19 05:54:52.812639 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-19 05:54:52.812649 | orchestrator | Thursday 19 February 2026 05:54:15 +0000 (0:00:00.758) 0:11:01.289 ***** 2026-02-19 05:54:52.812659 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:54:52.812669 | orchestrator | 2026-02-19 05:54:52.812678 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-19 05:54:52.812688 | orchestrator | Thursday 19 February 2026 05:54:15 +0000 (0:00:00.876) 0:11:02.165 ***** 2026-02-19 05:54:52.812698 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:54:52.812707 | orchestrator | 2026-02-19 05:54:52.812717 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-19 05:54:52.812727 | orchestrator | Thursday 19 February 2026 05:54:16 +0000 (0:00:00.798) 0:11:02.964 ***** 
2026-02-19 05:54:52.812737 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-19 05:54:52.812747 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-19 05:54:52.812757 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-19 05:54:52.812766 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:54:52.812776 | orchestrator | 2026-02-19 05:54:52.812786 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-19 05:54:52.812796 | orchestrator | Thursday 19 February 2026 05:54:17 +0000 (0:00:01.032) 0:11:03.997 ***** 2026-02-19 05:54:52.812805 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-19 05:54:52.812816 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-19 05:54:52.812826 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-19 05:54:52.812835 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-19 05:54:52.812845 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-19 05:54:52.812855 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-19 05:54:52.812870 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:54:52.812891 | orchestrator | 2026-02-19 05:54:52.812917 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-19 05:54:52.812935 | orchestrator | Thursday 19 February 2026 05:54:19 +0000 (0:00:01.541) 0:11:05.539 ***** 2026-02-19 05:54:52.812953 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 05:54:52.813004 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 05:54:52.813022 | orchestrator | 2026-02-19 05:54:52.813037 | 
orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-19 05:54:52.813087 | orchestrator | Thursday 19 February 2026 05:54:22 +0000 (0:00:03.206) 0:11:08.745 *****
2026-02-19 05:54:52.813106 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:54:52.813121 | orchestrator |
2026-02-19 05:54:52.813138 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 05:54:52.813154 | orchestrator | Thursday 19 February 2026 05:54:24 +0000 (0:00:02.256) 0:11:11.002 *****
2026-02-19 05:54:52.813171 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-02-19 05:54:52.813189 | orchestrator |
2026-02-19 05:54:52.813206 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-19 05:54:52.813221 | orchestrator | Thursday 19 February 2026 05:54:25 +0000 (0:00:01.136) 0:11:12.139 *****
2026-02-19 05:54:52.813238 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-02-19 05:54:52.813254 | orchestrator |
2026-02-19 05:54:52.813273 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-19 05:54:52.813286 | orchestrator | Thursday 19 February 2026 05:54:27 +0000 (0:00:01.128) 0:11:13.267 *****
2026-02-19 05:54:52.813296 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:54:52.813305 | orchestrator |
2026-02-19 05:54:52.813315 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-19 05:54:52.813325 | orchestrator | Thursday 19 February 2026 05:54:28 +0000 (0:00:01.496) 0:11:14.763 *****
2026-02-19 05:54:52.813334 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.813344 | orchestrator |
2026-02-19 05:54:52.813354 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-19 05:54:52.813378 | orchestrator | Thursday 19 February 2026 05:54:29 +0000 (0:00:01.112) 0:11:15.876 *****
2026-02-19 05:54:52.813388 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.813398 | orchestrator |
2026-02-19 05:54:52.813408 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-19 05:54:52.813417 | orchestrator | Thursday 19 February 2026 05:54:30 +0000 (0:00:01.139) 0:11:17.016 *****
2026-02-19 05:54:52.813427 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.813436 | orchestrator |
2026-02-19 05:54:52.813446 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-19 05:54:52.813456 | orchestrator | Thursday 19 February 2026 05:54:31 +0000 (0:00:01.130) 0:11:18.146 *****
2026-02-19 05:54:52.813465 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:54:52.813475 | orchestrator |
2026-02-19 05:54:52.813484 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-19 05:54:52.813494 | orchestrator | Thursday 19 February 2026 05:54:33 +0000 (0:00:01.602) 0:11:19.749 *****
2026-02-19 05:54:52.813503 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.813513 | orchestrator |
2026-02-19 05:54:52.813523 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-19 05:54:52.813555 | orchestrator | Thursday 19 February 2026 05:54:34 +0000 (0:00:01.121) 0:11:20.870 *****
2026-02-19 05:54:52.813565 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.813575 | orchestrator |
2026-02-19 05:54:52.813585 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-19 05:54:52.813594 | orchestrator | Thursday 19 February 2026 05:54:35 +0000 (0:00:01.119) 0:11:21.990 *****
2026-02-19 05:54:52.813604 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:54:52.813613 | orchestrator |
2026-02-19 05:54:52.813623 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-19 05:54:52.813632 | orchestrator | Thursday 19 February 2026 05:54:37 +0000 (0:00:01.556) 0:11:23.546 *****
2026-02-19 05:54:52.813642 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:54:52.813652 | orchestrator |
2026-02-19 05:54:52.813661 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-19 05:54:52.813671 | orchestrator | Thursday 19 February 2026 05:54:38 +0000 (0:00:01.537) 0:11:25.084 *****
2026-02-19 05:54:52.813680 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.813700 | orchestrator |
2026-02-19 05:54:52.813710 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 05:54:52.813720 | orchestrator | Thursday 19 February 2026 05:54:39 +0000 (0:00:00.754) 0:11:25.838 *****
2026-02-19 05:54:52.813729 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:54:52.813739 | orchestrator |
2026-02-19 05:54:52.813748 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 05:54:52.813758 | orchestrator | Thursday 19 February 2026 05:54:40 +0000 (0:00:00.791) 0:11:26.629 *****
2026-02-19 05:54:52.813767 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.813777 | orchestrator |
2026-02-19 05:54:52.813786 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 05:54:52.813796 | orchestrator | Thursday 19 February 2026 05:54:41 +0000 (0:00:00.769) 0:11:27.399 *****
2026-02-19 05:54:52.813805 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.813815 | orchestrator |
2026-02-19 05:54:52.813824 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 05:54:52.813834 | orchestrator | Thursday 19 February 2026 05:54:41 +0000 (0:00:00.747) 0:11:28.147 *****
2026-02-19 05:54:52.813843 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.813853 | orchestrator |
2026-02-19 05:54:52.813863 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 05:54:52.813872 | orchestrator | Thursday 19 February 2026 05:54:42 +0000 (0:00:00.773) 0:11:28.920 *****
2026-02-19 05:54:52.813882 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.813891 | orchestrator |
2026-02-19 05:54:52.813901 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 05:54:52.813910 | orchestrator | Thursday 19 February 2026 05:54:43 +0000 (0:00:00.787) 0:11:29.708 *****
2026-02-19 05:54:52.813920 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.813929 | orchestrator |
2026-02-19 05:54:52.813939 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 05:54:52.813948 | orchestrator | Thursday 19 February 2026 05:54:44 +0000 (0:00:00.809) 0:11:30.517 *****
2026-02-19 05:54:52.813958 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:54:52.814002 | orchestrator |
2026-02-19 05:54:52.814013 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 05:54:52.814080 | orchestrator | Thursday 19 February 2026 05:54:45 +0000 (0:00:00.840) 0:11:31.358 *****
2026-02-19 05:54:52.814091 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:54:52.814100 | orchestrator |
2026-02-19 05:54:52.814110 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 05:54:52.814119 | orchestrator | Thursday 19 February 2026 05:54:45 +0000 (0:00:00.822) 0:11:32.180 *****
2026-02-19 05:54:52.814129 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:54:52.814138 | orchestrator |
2026-02-19 05:54:52.814148 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-19 05:54:52.814157 | orchestrator | Thursday 19 February 2026 05:54:46 +0000 (0:00:00.820) 0:11:33.001 *****
2026-02-19 05:54:52.814167 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.814176 | orchestrator |
2026-02-19 05:54:52.814186 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-19 05:54:52.814195 | orchestrator | Thursday 19 February 2026 05:54:47 +0000 (0:00:00.760) 0:11:33.762 *****
2026-02-19 05:54:52.814205 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.814214 | orchestrator |
2026-02-19 05:54:52.814224 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-19 05:54:52.814234 | orchestrator | Thursday 19 February 2026 05:54:48 +0000 (0:00:00.743) 0:11:34.505 *****
2026-02-19 05:54:52.814243 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.814253 | orchestrator |
2026-02-19 05:54:52.814262 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-19 05:54:52.814272 | orchestrator | Thursday 19 February 2026 05:54:49 +0000 (0:00:00.760) 0:11:35.265 *****
2026-02-19 05:54:52.814281 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.814298 | orchestrator |
2026-02-19 05:54:52.814314 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-19 05:54:52.814323 | orchestrator | Thursday 19 February 2026 05:54:49 +0000 (0:00:00.761) 0:11:36.027 *****
2026-02-19 05:54:52.814333 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.814342 | orchestrator |
2026-02-19 05:54:52.814352 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-19 05:54:52.814361 | orchestrator | Thursday 19 February 2026 05:54:50 +0000 (0:00:00.768) 0:11:36.795 *****
2026-02-19 05:54:52.814371 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.814381 | orchestrator |
2026-02-19 05:54:52.814390 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-19 05:54:52.814400 | orchestrator | Thursday 19 February 2026 05:54:51 +0000 (0:00:00.737) 0:11:37.533 *****
2026-02-19 05:54:52.814409 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.814419 | orchestrator |
2026-02-19 05:54:52.814428 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-19 05:54:52.814438 | orchestrator | Thursday 19 February 2026 05:54:52 +0000 (0:00:00.745) 0:11:38.278 *****
2026-02-19 05:54:52.814448 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:54:52.814458 | orchestrator |
2026-02-19 05:54:52.814476 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-19 05:55:41.673831 | orchestrator | Thursday 19 February 2026 05:54:52 +0000 (0:00:00.743) 0:11:39.022 *****
2026-02-19 05:55:41.673923 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.673934 | orchestrator |
2026-02-19 05:55:41.673942 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-19 05:55:41.673949 | orchestrator | Thursday 19 February 2026 05:54:53 +0000 (0:00:00.758) 0:11:39.781 *****
2026-02-19 05:55:41.673956 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.673963 | orchestrator |
2026-02-19 05:55:41.673969 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-19 05:55:41.674085 | orchestrator | Thursday 19 February 2026 05:54:54 +0000 (0:00:00.770) 0:11:40.551 *****
2026-02-19 05:55:41.674092 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674099 | orchestrator |
2026-02-19 05:55:41.674105 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-19 05:55:41.674112 | orchestrator | Thursday 19 February 2026 05:54:55 +0000 (0:00:00.738) 0:11:41.290 *****
2026-02-19 05:55:41.674118 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674125 | orchestrator |
2026-02-19 05:55:41.674132 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-19 05:55:41.674138 | orchestrator | Thursday 19 February 2026 05:54:55 +0000 (0:00:00.780) 0:11:42.070 *****
2026-02-19 05:55:41.674144 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:55:41.674152 | orchestrator |
2026-02-19 05:55:41.674158 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-19 05:55:41.674165 | orchestrator | Thursday 19 February 2026 05:54:57 +0000 (0:00:01.547) 0:11:43.617 *****
2026-02-19 05:55:41.674171 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:55:41.674177 | orchestrator |
2026-02-19 05:55:41.674183 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-19 05:55:41.674190 | orchestrator | Thursday 19 February 2026 05:54:59 +0000 (0:00:02.146) 0:11:45.764 *****
2026-02-19 05:55:41.674196 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-02-19 05:55:41.674204 | orchestrator |
2026-02-19 05:55:41.674210 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-19 05:55:41.674216 | orchestrator | Thursday 19 February 2026 05:55:00 +0000 (0:00:01.133) 0:11:46.898 *****
2026-02-19 05:55:41.674223 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674229 | orchestrator |
2026-02-19 05:55:41.674235 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-19 05:55:41.674242 | orchestrator | Thursday 19 February 2026 05:55:01 +0000 (0:00:01.105) 0:11:48.004 *****
2026-02-19 05:55:41.674268 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674275 | orchestrator |
2026-02-19 05:55:41.674281 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-19 05:55:41.674287 | orchestrator | Thursday 19 February 2026 05:55:02 +0000 (0:00:01.189) 0:11:49.194 *****
2026-02-19 05:55:41.674293 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 05:55:41.674300 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 05:55:41.674307 | orchestrator |
2026-02-19 05:55:41.674313 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-19 05:55:41.674319 | orchestrator | Thursday 19 February 2026 05:55:04 +0000 (0:00:01.852) 0:11:51.046 *****
2026-02-19 05:55:41.674325 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:55:41.674331 | orchestrator |
2026-02-19 05:55:41.674337 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-19 05:55:41.674344 | orchestrator | Thursday 19 February 2026 05:55:06 +0000 (0:00:01.488) 0:11:52.535 *****
2026-02-19 05:55:41.674350 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674356 | orchestrator |
2026-02-19 05:55:41.674362 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-19 05:55:41.674368 | orchestrator | Thursday 19 February 2026 05:55:07 +0000 (0:00:01.106) 0:11:53.641 *****
2026-02-19 05:55:41.674374 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674381 | orchestrator |
2026-02-19 05:55:41.674388 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-19 05:55:41.674395 | orchestrator | Thursday 19 February 2026 05:55:08 +0000 (0:00:00.756) 0:11:54.398 *****
2026-02-19 05:55:41.674402 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674409 | orchestrator |
2026-02-19 05:55:41.674416 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-19 05:55:41.674422 | orchestrator | Thursday 19 February 2026 05:55:08 +0000 (0:00:00.762) 0:11:55.161 *****
2026-02-19 05:55:41.674429 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-02-19 05:55:41.674436 | orchestrator |
2026-02-19 05:55:41.674455 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-19 05:55:41.674462 | orchestrator | Thursday 19 February 2026 05:55:10 +0000 (0:00:01.096) 0:11:56.258 *****
2026-02-19 05:55:41.674469 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:55:41.674476 | orchestrator |
2026-02-19 05:55:41.674483 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-19 05:55:41.674490 | orchestrator | Thursday 19 February 2026 05:55:12 +0000 (0:00:02.681) 0:11:58.939 *****
2026-02-19 05:55:41.674497 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-19 05:55:41.674504 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-19 05:55:41.674511 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-19 05:55:41.674517 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674524 | orchestrator |
2026-02-19 05:55:41.674531 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-19 05:55:41.674538 | orchestrator | Thursday 19 February 2026 05:55:13 +0000 (0:00:01.156) 0:12:00.095 *****
2026-02-19 05:55:41.674545 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674552 | orchestrator |
2026-02-19 05:55:41.674573 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-19 05:55:41.674580 | orchestrator | Thursday 19 February 2026 05:55:14 +0000 (0:00:01.118) 0:12:01.214 *****
2026-02-19 05:55:41.674587 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674594 | orchestrator |
2026-02-19 05:55:41.674601 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-19 05:55:41.674608 | orchestrator | Thursday 19 February 2026 05:55:16 +0000 (0:00:01.193) 0:12:02.408 *****
2026-02-19 05:55:41.674621 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674628 | orchestrator |
2026-02-19 05:55:41.674635 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-19 05:55:41.674642 | orchestrator | Thursday 19 February 2026 05:55:17 +0000 (0:00:01.143) 0:12:03.551 *****
2026-02-19 05:55:41.674649 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674655 | orchestrator |
2026-02-19 05:55:41.674662 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-19 05:55:41.674669 | orchestrator | Thursday 19 February 2026 05:55:18 +0000 (0:00:01.125) 0:12:04.677 *****
2026-02-19 05:55:41.674676 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674683 | orchestrator |
2026-02-19 05:55:41.674690 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-19 05:55:41.674697 | orchestrator | Thursday 19 February 2026 05:55:19 +0000 (0:00:00.776) 0:12:05.453 *****
2026-02-19 05:55:41.674704 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:55:41.674711 | orchestrator |
2026-02-19 05:55:41.674718 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-19 05:55:41.674726 | orchestrator | Thursday 19 February 2026 05:55:21 +0000 (0:00:02.360) 0:12:07.813 *****
2026-02-19 05:55:41.674733 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:55:41.674740 | orchestrator |
2026-02-19 05:55:41.674746 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-19 05:55:41.674752 | orchestrator | Thursday 19 February 2026 05:55:22 +0000 (0:00:00.771) 0:12:08.585 *****
2026-02-19 05:55:41.674759 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-19 05:55:41.674765 | orchestrator |
2026-02-19 05:55:41.674771 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-19 05:55:41.674777 | orchestrator | Thursday 19 February 2026 05:55:23 +0000 (0:00:01.134) 0:12:09.720 *****
2026-02-19 05:55:41.674783 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674789 | orchestrator |
2026-02-19 05:55:41.674795 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-19 05:55:41.674801 | orchestrator | Thursday 19 February 2026 05:55:24 +0000 (0:00:01.173) 0:12:10.894 *****
2026-02-19 05:55:41.674807 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674813 | orchestrator |
2026-02-19 05:55:41.674819 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-19 05:55:41.674825 | orchestrator | Thursday 19 February 2026 05:55:25 +0000 (0:00:01.152) 0:12:12.046 *****
2026-02-19 05:55:41.674831 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674837 | orchestrator |
2026-02-19 05:55:41.674844 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-19 05:55:41.674850 | orchestrator | Thursday 19 February 2026 05:55:26 +0000 (0:00:01.139) 0:12:13.186 *****
2026-02-19 05:55:41.674856 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674862 | orchestrator |
2026-02-19 05:55:41.674868 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-19 05:55:41.674874 | orchestrator | Thursday 19 February 2026 05:55:28 +0000 (0:00:01.129) 0:12:14.316 *****
2026-02-19 05:55:41.674880 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674886 | orchestrator |
2026-02-19 05:55:41.674892 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-19 05:55:41.674898 | orchestrator | Thursday 19 February 2026 05:55:29 +0000 (0:00:01.153) 0:12:15.470 *****
2026-02-19 05:55:41.674904 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674911 | orchestrator |
2026-02-19 05:55:41.674917 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-19 05:55:41.674923 | orchestrator | Thursday 19 February 2026 05:55:30 +0000 (0:00:01.099) 0:12:16.569 *****
2026-02-19 05:55:41.674929 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674935 | orchestrator |
2026-02-19 05:55:41.674941 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-19 05:55:41.674947 | orchestrator | Thursday 19 February 2026 05:55:31 +0000 (0:00:01.160) 0:12:17.730 *****
2026-02-19 05:55:41.674957 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:55:41.674964 | orchestrator |
2026-02-19 05:55:41.674970 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-19 05:55:41.674994 | orchestrator | Thursday 19 February 2026 05:55:32 +0000 (0:00:01.125) 0:12:18.855 *****
2026-02-19 05:55:41.675010 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:55:41.675017 | orchestrator |
2026-02-19 05:55:41.675033 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-19 05:55:41.675040 | orchestrator | Thursday 19 February 2026 05:55:33 +0000 (0:00:01.255) 0:12:20.111 *****
2026-02-19 05:55:41.675046 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-19 05:55:41.675052 | orchestrator |
2026-02-19 05:55:41.675058 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-19 05:55:41.675064 | orchestrator | Thursday 19 February 2026 05:55:34 +0000 (0:00:01.106) 0:12:21.218 *****
2026-02-19 05:55:41.675070 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-19 05:55:41.675077 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-19 05:55:41.675083 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-19 05:55:41.675090 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-19 05:55:41.675096 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-19 05:55:41.675102 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-19 05:55:41.675108 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-19 05:55:41.675118 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-19 05:56:15.205158 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-19 05:56:15.205292 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-19 05:56:15.205325 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-19 05:56:15.205346 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-19 05:56:15.205365 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-19 05:56:15.205384 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 05:56:15.205403 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-19 05:56:15.205422 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-19 05:56:15.205441 | orchestrator |
2026-02-19 05:56:15.205463 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-19 05:56:15.205483 | orchestrator | Thursday 19 February 2026 05:55:41 +0000 (0:00:06.663) 0:12:27.881 *****
2026-02-19 05:56:15.205503 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.205525 | orchestrator |
2026-02-19 05:56:15.205545 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-19 05:56:15.205563 | orchestrator | Thursday 19 February 2026 05:55:42 +0000 (0:00:00.812) 0:12:28.694 *****
2026-02-19 05:56:15.205574 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.205585 | orchestrator |
2026-02-19 05:56:15.205596 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 05:56:15.205607 | orchestrator | Thursday 19 February 2026 05:55:43 +0000 (0:00:00.765) 0:12:29.459 *****
2026-02-19 05:56:15.205618 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.205630 | orchestrator |
2026-02-19 05:56:15.205643 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 05:56:15.205655 | orchestrator | Thursday 19 February 2026 05:55:44 +0000 (0:00:00.773) 0:12:30.232 *****
2026-02-19 05:56:15.205668 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.205679 | orchestrator |
2026-02-19 05:56:15.205693 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 05:56:15.205706 | orchestrator | Thursday 19 February 2026 05:55:44 +0000 (0:00:00.773) 0:12:31.006 *****
2026-02-19 05:56:15.205718 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.205757 | orchestrator |
2026-02-19 05:56:15.205770 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 05:56:15.205783 | orchestrator | Thursday 19 February 2026 05:55:45 +0000 (0:00:00.804) 0:12:31.811 *****
2026-02-19 05:56:15.205795 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.205807 | orchestrator |
2026-02-19 05:56:15.205820 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 05:56:15.205834 | orchestrator | Thursday 19 February 2026 05:55:46 +0000 (0:00:00.806) 0:12:32.617 *****
2026-02-19 05:56:15.205846 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.205858 | orchestrator |
2026-02-19 05:56:15.205871 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 05:56:15.205883 | orchestrator | Thursday 19 February 2026 05:55:47 +0000 (0:00:00.784) 0:12:33.402 *****
2026-02-19 05:56:15.205896 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.205907 | orchestrator |
2026-02-19 05:56:15.205920 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 05:56:15.205932 | orchestrator | Thursday 19 February 2026 05:55:47 +0000 (0:00:00.790) 0:12:34.193 *****
2026-02-19 05:56:15.205944 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.205956 | orchestrator |
2026-02-19 05:56:15.205969 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 05:56:15.206009 | orchestrator | Thursday 19 February 2026 05:55:48 +0000 (0:00:00.780) 0:12:34.974 *****
2026-02-19 05:56:15.206085 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206096 | orchestrator |
2026-02-19 05:56:15.206107 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 05:56:15.206118 | orchestrator | Thursday 19 February 2026 05:55:49 +0000 (0:00:00.761) 0:12:35.735 *****
2026-02-19 05:56:15.206129 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206140 | orchestrator |
2026-02-19 05:56:15.206151 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 05:56:15.206162 | orchestrator | Thursday 19 February 2026 05:55:50 +0000 (0:00:00.733) 0:12:36.469 *****
2026-02-19 05:56:15.206172 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206183 | orchestrator |
2026-02-19 05:56:15.206194 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 05:56:15.206204 | orchestrator | Thursday 19 February 2026 05:55:51 +0000 (0:00:00.799) 0:12:37.268 *****
2026-02-19 05:56:15.206232 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206243 | orchestrator |
2026-02-19 05:56:15.206254 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 05:56:15.206265 | orchestrator | Thursday 19 February 2026 05:55:51 +0000 (0:00:00.862) 0:12:38.130 *****
2026-02-19 05:56:15.206276 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206287 | orchestrator |
2026-02-19 05:56:15.206297 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-19 05:56:15.206308 | orchestrator | Thursday 19 February 2026 05:55:52 +0000 (0:00:00.764) 0:12:38.895 *****
2026-02-19 05:56:15.206319 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206330 | orchestrator |
2026-02-19 05:56:15.206340 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-19 05:56:15.206351 | orchestrator | Thursday 19 February 2026 05:55:53 +0000 (0:00:00.904) 0:12:39.799 *****
2026-02-19 05:56:15.206362 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206373 | orchestrator |
2026-02-19 05:56:15.206383 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-19 05:56:15.206394 | orchestrator | Thursday 19 February 2026 05:55:54 +0000 (0:00:00.757) 0:12:40.556 *****
2026-02-19 05:56:15.206405 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206416 | orchestrator |
2026-02-19 05:56:15.206446 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 05:56:15.206469 | orchestrator | Thursday 19 February 2026 05:55:55 +0000 (0:00:00.774) 0:12:41.331 *****
2026-02-19 05:56:15.206483 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206502 | orchestrator |
2026-02-19 05:56:15.206521 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 05:56:15.206539 | orchestrator | Thursday 19 February 2026 05:55:55 +0000 (0:00:00.797) 0:12:42.129 *****
2026-02-19 05:56:15.206557 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206576 | orchestrator |
2026-02-19 05:56:15.206592 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 05:56:15.206609 | orchestrator | Thursday 19 February 2026 05:55:56 +0000 (0:00:00.786) 0:12:42.916 *****
2026-02-19 05:56:15.206626 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206644 | orchestrator |
2026-02-19 05:56:15.206663 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 05:56:15.206681 | orchestrator | Thursday 19 February 2026 05:55:57 +0000 (0:00:00.774) 0:12:43.691 *****
2026-02-19 05:56:15.206699 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206718 | orchestrator |
2026-02-19 05:56:15.206738 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 05:56:15.206756 | orchestrator | Thursday 19 February 2026 05:55:58 +0000 (0:00:00.782) 0:12:44.474 *****
2026-02-19 05:56:15.206776 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-19 05:56:15.206795 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-19 05:56:15.206813 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-19 05:56:15.206832 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206845 | orchestrator |
2026-02-19 05:56:15.206856 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 05:56:15.206867 | orchestrator | Thursday 19 February 2026 05:55:59 +0000 (0:00:01.071) 0:12:45.545 *****
2026-02-19 05:56:15.206877 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-19 05:56:15.206888 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-19 05:56:15.206898 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-19 05:56:15.206909 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.206920 | orchestrator |
2026-02-19 05:56:15.206930 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 05:56:15.206941 | orchestrator | Thursday 19 February 2026 05:56:00 +0000 (0:00:01.010) 0:12:46.556 *****
2026-02-19 05:56:15.206953 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-19 05:56:15.206972 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-19 05:56:15.207020 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-19 05:56:15.207038 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.207055 | orchestrator |
2026-02-19 05:56:15.207074 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 05:56:15.207092 | orchestrator | Thursday 19 February 2026 05:56:01 +0000 (0:00:01.073) 0:12:47.629 *****
2026-02-19 05:56:15.207111 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.207130 | orchestrator |
2026-02-19 05:56:15.207148 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 05:56:15.207165 | orchestrator | Thursday 19 February 2026 05:56:02 +0000 (0:00:00.763) 0:12:48.392 *****
2026-02-19 05:56:15.207177 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-19 05:56:15.207187 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.207198 | orchestrator |
2026-02-19 05:56:15.207209 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-19 05:56:15.207220 | orchestrator | Thursday 19 February 2026 05:56:03 +0000 (0:00:00.877) 0:12:49.270 *****
2026-02-19 05:56:15.207230 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:56:15.207241 | orchestrator |
2026-02-19 05:56:15.207252 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-19 05:56:15.207273 | orchestrator | Thursday 19 February 2026 05:56:04 +0000 (0:00:01.420) 0:12:50.691 *****
2026-02-19 05:56:15.207284 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:56:15.207295 | orchestrator |
2026-02-19 05:56:15.207306 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-19 05:56:15.207317 | orchestrator | Thursday 19 February 2026 05:56:05 +0000 (0:00:00.792) 0:12:51.483 *****
2026-02-19 05:56:15.207328 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-02-19 05:56:15.207340 | orchestrator |
2026-02-19 05:56:15.207350 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-19 05:56:15.207368 | orchestrator | Thursday 19 February 2026 05:56:06 +0000 (0:00:01.222) 0:12:52.706 *****
2026-02-19 05:56:15.207379 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-02-19 05:56:15.207390 | orchestrator |
2026-02-19 05:56:15.207401 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-19 05:56:15.207411 | orchestrator | Thursday 19 February 2026 05:56:09 +0000 (0:00:03.160) 0:12:55.866 *****
2026-02-19 05:56:15.207422 | orchestrator | skipping: [testbed-node-1]
2026-02-19 05:56:15.207433 | orchestrator |
2026-02-19 05:56:15.207444 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-19 05:56:15.207454 | orchestrator | Thursday 19 February 2026 05:56:10 +0000 (0:00:01.159) 0:12:57.026 *****
2026-02-19 05:56:15.207465 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:56:15.207476 | orchestrator |
2026-02-19 05:56:15.207486 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-19 05:56:15.207497 | orchestrator | Thursday 19 February 2026 05:56:11 +0000 (0:00:01.125) 0:12:58.152 *****
2026-02-19 05:56:15.207507 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:56:15.207518 | orchestrator |
2026-02-19 05:56:15.207529 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-19 05:56:15.207539 | orchestrator | Thursday 19 February 2026 05:56:13 +0000 (0:00:01.167) 0:12:59.320 *****
2026-02-19 05:56:15.207561 | orchestrator | changed: [testbed-node-1]
2026-02-19 05:57:32.814810 | orchestrator |
2026-02-19 05:57:32.814969 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-19 05:57:32.814994 | orchestrator | Thursday 19 February 2026 05:56:15 +0000 (0:00:02.098) 0:13:01.418 *****
2026-02-19 05:57:32.815059 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:57:32.815073 | orchestrator |
2026-02-19 05:57:32.815085 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-19 05:57:32.815096 | orchestrator | Thursday 19 February 2026 05:56:16 +0000 (0:00:01.646) 0:13:03.064 *****
2026-02-19 05:57:32.815107 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:57:32.815118 | orchestrator |
2026-02-19 05:57:32.815129 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-19 05:57:32.815140 | orchestrator | Thursday 19 February 2026 05:56:18 +0000 (0:00:01.475) 0:13:04.540 *****
2026-02-19 05:57:32.815152 | orchestrator | ok: [testbed-node-1]
2026-02-19 05:57:32.815163 | orchestrator |
2026-02-19 05:57:32.815173 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-19 05:57:32.815184 | orchestrator | Thursday 19 February 2026 05:56:19 +0000 (0:00:01.482) 0:13:06.023 *****
2026-02-19 05:57:32.815196 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-19 05:57:32.815207 | orchestrator |
2026-02-19 05:57:32.815218 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-19 05:57:32.815229 | orchestrator | Thursday 19 February 2026 05:56:21 +0000 (0:00:01.633) 0:13:07.657 *****
2026-02-19 05:57:32.815239 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-19 05:57:32.815250 | orchestrator |
2026-02-19 05:57:32.815261 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-19 05:57:32.815272 | orchestrator | Thursday 19 February 2026 05:56:22 +0000 (0:00:01.563) 0:13:09.220 *****
2026-02-19 05:57:32.815283 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 05:57:32.815318 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-19 05:57:32.815332 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-19 05:57:32.815344 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-02-19 05:57:32.815356 | orchestrator |
2026-02-19 05:57:32.815368 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-19 05:57:32.815380 | orchestrator | Thursday 19 February 2026 05:56:27 +0000
(0:00:04.419) 0:13:13.640 ***** 2026-02-19 05:57:32.815392 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:57:32.815405 | orchestrator | 2026-02-19 05:57:32.815417 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-19 05:57:32.815429 | orchestrator | Thursday 19 February 2026 05:56:29 +0000 (0:00:02.059) 0:13:15.699 ***** 2026-02-19 05:57:32.815441 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:57:32.815453 | orchestrator | 2026-02-19 05:57:32.815465 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-19 05:57:32.815477 | orchestrator | Thursday 19 February 2026 05:56:30 +0000 (0:00:01.286) 0:13:16.985 ***** 2026-02-19 05:57:32.815489 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:57:32.815501 | orchestrator | 2026-02-19 05:57:32.815513 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-19 05:57:32.815526 | orchestrator | Thursday 19 February 2026 05:56:31 +0000 (0:00:01.110) 0:13:18.096 ***** 2026-02-19 05:57:32.815538 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:57:32.815550 | orchestrator | 2026-02-19 05:57:32.815562 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-19 05:57:32.815575 | orchestrator | Thursday 19 February 2026 05:56:33 +0000 (0:00:01.696) 0:13:19.793 ***** 2026-02-19 05:57:32.815587 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:57:32.815598 | orchestrator | 2026-02-19 05:57:32.815610 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-19 05:57:32.815623 | orchestrator | Thursday 19 February 2026 05:56:35 +0000 (0:00:01.852) 0:13:21.645 ***** 2026-02-19 05:57:32.815635 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:57:32.815647 | orchestrator | 2026-02-19 05:57:32.815658 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-02-19 05:57:32.815669 | orchestrator | Thursday 19 February 2026 05:56:36 +0000 (0:00:00.747) 0:13:22.392 ***** 2026-02-19 05:57:32.815679 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-02-19 05:57:32.815690 | orchestrator | 2026-02-19 05:57:32.815701 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-19 05:57:32.815712 | orchestrator | Thursday 19 February 2026 05:56:37 +0000 (0:00:01.107) 0:13:23.500 ***** 2026-02-19 05:57:32.815722 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:57:32.815733 | orchestrator | 2026-02-19 05:57:32.815744 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-19 05:57:32.815770 | orchestrator | Thursday 19 February 2026 05:56:38 +0000 (0:00:01.126) 0:13:24.627 ***** 2026-02-19 05:57:32.815781 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:57:32.815792 | orchestrator | 2026-02-19 05:57:32.815803 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-19 05:57:32.815814 | orchestrator | Thursday 19 February 2026 05:56:39 +0000 (0:00:01.141) 0:13:25.769 ***** 2026-02-19 05:57:32.815824 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-02-19 05:57:32.815835 | orchestrator | 2026-02-19 05:57:32.815846 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-19 05:57:32.815857 | orchestrator | Thursday 19 February 2026 05:56:40 +0000 (0:00:01.084) 0:13:26.854 ***** 2026-02-19 05:57:32.815867 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:57:32.815878 | orchestrator | 2026-02-19 05:57:32.815889 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-19 05:57:32.815900 | orchestrator | Thursday 19 February 2026 05:56:43 +0000 
(0:00:02.614) 0:13:29.468 ***** 2026-02-19 05:57:32.815920 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:57:32.815931 | orchestrator | 2026-02-19 05:57:32.815942 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-19 05:57:32.815952 | orchestrator | Thursday 19 February 2026 05:56:45 +0000 (0:00:01.960) 0:13:31.429 ***** 2026-02-19 05:57:32.815984 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:57:32.815995 | orchestrator | 2026-02-19 05:57:32.816040 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-19 05:57:32.816051 | orchestrator | Thursday 19 February 2026 05:56:48 +0000 (0:00:03.475) 0:13:34.905 ***** 2026-02-19 05:57:32.816062 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:57:32.816073 | orchestrator | 2026-02-19 05:57:32.816084 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-19 05:57:32.816095 | orchestrator | Thursday 19 February 2026 05:56:51 +0000 (0:00:02.953) 0:13:37.858 ***** 2026-02-19 05:57:32.816106 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-02-19 05:57:32.816117 | orchestrator | 2026-02-19 05:57:32.816127 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-19 05:57:32.816138 | orchestrator | Thursday 19 February 2026 05:56:52 +0000 (0:00:01.096) 0:13:38.955 ***** 2026-02-19 05:57:32.816149 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-19 05:57:32.816160 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:57:32.816171 | orchestrator | 2026-02-19 05:57:32.816182 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-19 05:57:32.816192 | orchestrator | Thursday 19 February 2026 05:57:15 +0000 (0:00:22.994) 0:14:01.949 ***** 2026-02-19 05:57:32.816203 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:57:32.816214 | orchestrator | 2026-02-19 05:57:32.816225 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-19 05:57:32.816235 | orchestrator | Thursday 19 February 2026 05:57:18 +0000 (0:00:02.605) 0:14:04.555 ***** 2026-02-19 05:57:32.816246 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:57:32.816257 | orchestrator | 2026-02-19 05:57:32.816268 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-19 05:57:32.816278 | orchestrator | Thursday 19 February 2026 05:57:19 +0000 (0:00:00.759) 0:14:05.315 ***** 2026-02-19 05:57:32.816292 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-19 05:57:32.816307 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-19 05:57:32.816318 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-19 05:57:32.816329 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-19 05:57:32.816348 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-19 05:57:32.816368 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}])  2026-02-19 05:57:32.816381 | orchestrator | 2026-02-19 05:57:32.816392 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-19 05:57:32.816403 | orchestrator | Thursday 19 February 2026 05:57:28 +0000 (0:00:09.804) 0:14:15.119 ***** 2026-02-19 05:57:32.816414 | orchestrator | changed: [testbed-node-1] 2026-02-19 05:57:32.816425 | orchestrator | 
2026-02-19 05:57:32.816436 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 05:57:32.816446 | orchestrator | Thursday 19 February 2026 05:57:31 +0000 (0:00:02.122) 0:14:17.241 ***** 2026-02-19 05:57:32.816463 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 05:58:07.335158 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-19 05:58:07.335251 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-19 05:58:07.335260 | orchestrator | 2026-02-19 05:58:07.335268 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 05:58:07.335275 | orchestrator | Thursday 19 February 2026 05:57:32 +0000 (0:00:01.786) 0:14:19.028 ***** 2026-02-19 05:58:07.335283 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-19 05:58:07.335290 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-19 05:58:07.335296 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-19 05:58:07.335303 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:58:07.335309 | orchestrator | 2026-02-19 05:58:07.335316 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-19 05:58:07.335322 | orchestrator | Thursday 19 February 2026 05:57:33 +0000 (0:00:01.035) 0:14:20.063 ***** 2026-02-19 05:58:07.335328 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:58:07.335335 | orchestrator | 2026-02-19 05:58:07.335341 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-19 05:58:07.335347 | orchestrator | Thursday 19 February 2026 05:57:34 +0000 (0:00:00.765) 0:14:20.829 ***** 2026-02-19 05:58:07.335353 | orchestrator | ok: [testbed-node-1] 2026-02-19 05:58:07.335360 | orchestrator | 2026-02-19 05:58:07.335367 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-19 05:58:07.335373 | orchestrator | Thursday 19 February 2026 05:57:36 +0000 (0:00:02.275) 0:14:23.104 ***** 2026-02-19 05:58:07.335379 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:58:07.335385 | orchestrator | 2026-02-19 05:58:07.335391 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-19 05:58:07.335397 | orchestrator | Thursday 19 February 2026 05:57:37 +0000 (0:00:00.757) 0:14:23.862 ***** 2026-02-19 05:58:07.335403 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:58:07.335410 | orchestrator | 2026-02-19 05:58:07.335416 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-19 05:58:07.335422 | orchestrator | Thursday 19 February 2026 05:57:38 +0000 (0:00:00.771) 0:14:24.634 ***** 2026-02-19 05:58:07.335428 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:58:07.335434 | orchestrator | 2026-02-19 05:58:07.335440 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-19 05:58:07.335446 | orchestrator | Thursday 19 February 2026 05:57:39 +0000 (0:00:00.756) 0:14:25.391 ***** 2026-02-19 05:58:07.335472 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:58:07.335479 | orchestrator | 2026-02-19 05:58:07.335485 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-19 05:58:07.335491 | orchestrator | Thursday 19 February 2026 05:57:39 +0000 (0:00:00.762) 0:14:26.154 ***** 2026-02-19 05:58:07.335497 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:58:07.335503 | 
orchestrator | 2026-02-19 05:58:07.335509 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-19 05:58:07.335515 | orchestrator | Thursday 19 February 2026 05:57:40 +0000 (0:00:00.786) 0:14:26.940 ***** 2026-02-19 05:58:07.335522 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:58:07.335528 | orchestrator | 2026-02-19 05:58:07.335534 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-19 05:58:07.335540 | orchestrator | Thursday 19 February 2026 05:57:41 +0000 (0:00:00.756) 0:14:27.697 ***** 2026-02-19 05:58:07.335546 | orchestrator | skipping: [testbed-node-1] 2026-02-19 05:58:07.335552 | orchestrator | 2026-02-19 05:58:07.335558 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-19 05:58:07.335564 | orchestrator | 2026-02-19 05:58:07.335570 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-19 05:58:07.335576 | orchestrator | Thursday 19 February 2026 05:57:42 +0000 (0:00:00.928) 0:14:28.625 ***** 2026-02-19 05:58:07.335583 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:07.335589 | orchestrator | 2026-02-19 05:58:07.335595 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-19 05:58:07.335601 | orchestrator | Thursday 19 February 2026 05:57:43 +0000 (0:00:01.111) 0:14:29.737 ***** 2026-02-19 05:58:07.335607 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:07.335613 | orchestrator | 2026-02-19 05:58:07.335619 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-19 05:58:07.335625 | orchestrator | Thursday 19 February 2026 05:57:44 +0000 (0:00:00.807) 0:14:30.544 ***** 2026-02-19 05:58:07.335632 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:07.335638 | orchestrator | 2026-02-19 05:58:07.335644 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-02-19 05:58:07.335661 | orchestrator | Thursday 19 February 2026 05:57:45 +0000 (0:00:00.775) 0:14:31.320 ***** 2026-02-19 05:58:07.335668 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:07.335674 | orchestrator | 2026-02-19 05:58:07.335681 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 05:58:07.335687 | orchestrator | Thursday 19 February 2026 05:57:45 +0000 (0:00:00.808) 0:14:32.129 ***** 2026-02-19 05:58:07.335693 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-19 05:58:07.335699 | orchestrator | 2026-02-19 05:58:07.335705 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-19 05:58:07.335711 | orchestrator | Thursday 19 February 2026 05:57:47 +0000 (0:00:01.122) 0:14:33.252 ***** 2026-02-19 05:58:07.335717 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:07.335723 | orchestrator | 2026-02-19 05:58:07.335730 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-19 05:58:07.335736 | orchestrator | Thursday 19 February 2026 05:57:48 +0000 (0:00:01.478) 0:14:34.730 ***** 2026-02-19 05:58:07.335742 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:07.335748 | orchestrator | 2026-02-19 05:58:07.335754 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 05:58:07.335760 | orchestrator | Thursday 19 February 2026 05:57:49 +0000 (0:00:01.133) 0:14:35.864 ***** 2026-02-19 05:58:07.335766 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:07.335772 | orchestrator | 2026-02-19 05:58:07.335790 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 05:58:07.335796 | orchestrator | Thursday 19 February 2026 05:57:51 +0000 (0:00:01.536) 0:14:37.400 
***** 2026-02-19 05:58:07.335802 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:07.335809 | orchestrator | 2026-02-19 05:58:07.335820 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-19 05:58:07.335826 | orchestrator | Thursday 19 February 2026 05:57:52 +0000 (0:00:01.121) 0:14:38.522 ***** 2026-02-19 05:58:07.335832 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:07.335838 | orchestrator | 2026-02-19 05:58:07.335845 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-19 05:58:07.335851 | orchestrator | Thursday 19 February 2026 05:57:53 +0000 (0:00:01.125) 0:14:39.648 ***** 2026-02-19 05:58:07.335857 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:07.335863 | orchestrator | 2026-02-19 05:58:07.335869 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-19 05:58:07.335875 | orchestrator | Thursday 19 February 2026 05:57:54 +0000 (0:00:01.188) 0:14:40.836 ***** 2026-02-19 05:58:07.335881 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:07.335888 | orchestrator | 2026-02-19 05:58:07.335894 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-19 05:58:07.335900 | orchestrator | Thursday 19 February 2026 05:57:55 +0000 (0:00:01.126) 0:14:41.963 ***** 2026-02-19 05:58:07.335906 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:07.335912 | orchestrator | 2026-02-19 05:58:07.335918 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-19 05:58:07.335925 | orchestrator | Thursday 19 February 2026 05:57:56 +0000 (0:00:01.108) 0:14:43.072 ***** 2026-02-19 05:58:07.335931 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 05:58:07.335937 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-02-19 05:58:07.335943 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-19 05:58:07.335949 | orchestrator | 2026-02-19 05:58:07.335955 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-19 05:58:07.335961 | orchestrator | Thursday 19 February 2026 05:57:58 +0000 (0:00:01.955) 0:14:45.028 ***** 2026-02-19 05:58:07.335968 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:07.335974 | orchestrator | 2026-02-19 05:58:07.335980 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-19 05:58:07.335986 | orchestrator | Thursday 19 February 2026 05:58:00 +0000 (0:00:01.243) 0:14:46.271 ***** 2026-02-19 05:58:07.335992 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 05:58:07.335998 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 05:58:07.336004 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-19 05:58:07.336031 | orchestrator | 2026-02-19 05:58:07.336038 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-19 05:58:07.336044 | orchestrator | Thursday 19 February 2026 05:58:03 +0000 (0:00:03.158) 0:14:49.430 ***** 2026-02-19 05:58:07.336050 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-19 05:58:07.336057 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-19 05:58:07.336063 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-19 05:58:07.336069 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:07.336075 | orchestrator | 2026-02-19 05:58:07.336081 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-19 05:58:07.336088 | orchestrator | Thursday 19 February 2026 05:58:04 +0000 
(0:00:01.397) 0:14:50.828 ***** 2026-02-19 05:58:07.336095 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-19 05:58:07.336104 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-19 05:58:07.336151 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-19 05:58:07.336158 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:07.336164 | orchestrator | 2026-02-19 05:58:07.336171 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-19 05:58:07.336177 | orchestrator | Thursday 19 February 2026 05:58:06 +0000 (0:00:01.594) 0:14:52.423 ***** 2026-02-19 05:58:07.336185 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:07.336198 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:26.752577 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:26.752660 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:26.752668 | orchestrator | 2026-02-19 05:58:26.752674 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-19 05:58:26.752680 | orchestrator | Thursday 19 February 2026 05:58:07 +0000 (0:00:01.121) 0:14:53.545 ***** 2026-02-19 05:58:26.752685 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 05:58:00.895625', 'end': '2026-02-19 05:58:00.950454', 'delta': '0:00:00.054829', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-19 05:58:26.752693 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 
05:58:01.466914', 'end': '2026-02-19 05:58:01.513622', 'delta': '0:00:00.046708', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-19 05:58:26.752698 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '7f7671ec0784', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 05:58:02.028199', 'end': '2026-02-19 05:58:02.075915', 'delta': '0:00:00.047716', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7f7671ec0784'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-19 05:58:26.752716 | orchestrator | 2026-02-19 05:58:26.752721 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-19 05:58:26.752734 | orchestrator | Thursday 19 February 2026 05:58:08 +0000 (0:00:01.183) 0:14:54.729 ***** 2026-02-19 05:58:26.752738 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:26.752744 | orchestrator | 2026-02-19 05:58:26.752748 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-19 05:58:26.752752 | orchestrator | Thursday 19 February 2026 05:58:09 +0000 (0:00:01.261) 0:14:55.990 ***** 2026-02-19 
05:58:26.752756 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:26.752760 | orchestrator | 2026-02-19 05:58:26.752764 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-19 05:58:26.752768 | orchestrator | Thursday 19 February 2026 05:58:10 +0000 (0:00:01.227) 0:14:57.217 ***** 2026-02-19 05:58:26.752772 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:26.752775 | orchestrator | 2026-02-19 05:58:26.752779 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-19 05:58:26.752783 | orchestrator | Thursday 19 February 2026 05:58:12 +0000 (0:00:01.124) 0:14:58.342 ***** 2026-02-19 05:58:26.752787 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-19 05:58:26.752792 | orchestrator | 2026-02-19 05:58:26.752795 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 05:58:26.752799 | orchestrator | Thursday 19 February 2026 05:58:14 +0000 (0:00:02.014) 0:15:00.356 ***** 2026-02-19 05:58:26.752803 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:26.752807 | orchestrator | 2026-02-19 05:58:26.752811 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-19 05:58:26.752815 | orchestrator | Thursday 19 February 2026 05:58:15 +0000 (0:00:01.121) 0:15:01.477 ***** 2026-02-19 05:58:26.752839 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:26.752844 | orchestrator | 2026-02-19 05:58:26.752848 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-19 05:58:26.752852 | orchestrator | Thursday 19 February 2026 05:58:16 +0000 (0:00:01.122) 0:15:02.600 ***** 2026-02-19 05:58:26.752856 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:26.752860 | orchestrator | 2026-02-19 05:58:26.752870 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-02-19 05:58:26.752875 | orchestrator | Thursday 19 February 2026 05:58:17 +0000 (0:00:01.239) 0:15:03.839 ***** 2026-02-19 05:58:26.752879 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:26.752883 | orchestrator | 2026-02-19 05:58:26.752886 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-19 05:58:26.752890 | orchestrator | Thursday 19 February 2026 05:58:18 +0000 (0:00:01.154) 0:15:04.994 ***** 2026-02-19 05:58:26.752894 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:26.752898 | orchestrator | 2026-02-19 05:58:26.752902 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-19 05:58:26.752906 | orchestrator | Thursday 19 February 2026 05:58:19 +0000 (0:00:01.100) 0:15:06.094 ***** 2026-02-19 05:58:26.752910 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:26.752914 | orchestrator | 2026-02-19 05:58:26.752918 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-19 05:58:26.752922 | orchestrator | Thursday 19 February 2026 05:58:21 +0000 (0:00:01.137) 0:15:07.231 ***** 2026-02-19 05:58:26.752926 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:26.752930 | orchestrator | 2026-02-19 05:58:26.752934 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 05:58:26.752942 | orchestrator | Thursday 19 February 2026 05:58:22 +0000 (0:00:01.103) 0:15:08.335 ***** 2026-02-19 05:58:26.752946 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:26.752950 | orchestrator | 2026-02-19 05:58:26.752953 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 05:58:26.752957 | orchestrator | Thursday 19 February 2026 05:58:23 +0000 (0:00:01.117) 0:15:09.453 ***** 2026-02-19 05:58:26.752961 | orchestrator | 
skipping: [testbed-node-2] 2026-02-19 05:58:26.752965 | orchestrator | 2026-02-19 05:58:26.752969 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 05:58:26.752974 | orchestrator | Thursday 19 February 2026 05:58:24 +0000 (0:00:01.148) 0:15:10.601 ***** 2026-02-19 05:58:26.752978 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:26.752982 | orchestrator | 2026-02-19 05:58:26.752986 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 05:58:26.752990 | orchestrator | Thursday 19 February 2026 05:58:25 +0000 (0:00:01.132) 0:15:11.733 ***** 2026-02-19 05:58:26.752994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:58:26.752999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:58:26.753005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:58:26.753010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 05:58:26.753057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:58:26.753066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:58:27.944743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-19 05:58:27.944900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a13c58d9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 05:58:27.944950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:58:27.944972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 05:58:27.944990 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:27.945009 | orchestrator | 2026-02-19 05:58:27.945150 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 05:58:27.945169 | orchestrator | Thursday 19 February 2026 05:58:26 +0000 (0:00:01.228) 0:15:12.962 ***** 2026-02-19 05:58:27.945212 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:27.945238 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:27.945249 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:27.945262 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:27.945282 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:27.945293 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:27.945305 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:27.945335 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a13c58d9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:58.330360 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:58.330475 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 05:58:58.330491 | orchestrator | skipping: [testbed-node-2] 2026-02-19 
05:58:58.330504 | orchestrator | 2026-02-19 05:58:58.330519 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 05:58:58.330565 | orchestrator | Thursday 19 February 2026 05:58:27 +0000 (0:00:01.196) 0:15:14.159 ***** 2026-02-19 05:58:58.330584 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:58.330601 | orchestrator | 2026-02-19 05:58:58.330619 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 05:58:58.330636 | orchestrator | Thursday 19 February 2026 05:58:29 +0000 (0:00:01.505) 0:15:15.664 ***** 2026-02-19 05:58:58.330652 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:58.330668 | orchestrator | 2026-02-19 05:58:58.330686 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 05:58:58.330696 | orchestrator | Thursday 19 February 2026 05:58:30 +0000 (0:00:01.117) 0:15:16.782 ***** 2026-02-19 05:58:58.330705 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:58:58.330715 | orchestrator | 2026-02-19 05:58:58.330725 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 05:58:58.330734 | orchestrator | Thursday 19 February 2026 05:58:31 +0000 (0:00:01.429) 0:15:18.211 ***** 2026-02-19 05:58:58.330744 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:58.330753 | orchestrator | 2026-02-19 05:58:58.330763 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 05:58:58.330773 | orchestrator | Thursday 19 February 2026 05:58:33 +0000 (0:00:01.138) 0:15:19.350 ***** 2026-02-19 05:58:58.330782 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:58.330792 | orchestrator | 2026-02-19 05:58:58.330801 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 05:58:58.330811 | orchestrator | Thursday 19 February 2026 
05:58:34 +0000 (0:00:01.218) 0:15:20.569 ***** 2026-02-19 05:58:58.330820 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:58.330830 | orchestrator | 2026-02-19 05:58:58.330839 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 05:58:58.330849 | orchestrator | Thursday 19 February 2026 05:58:35 +0000 (0:00:01.164) 0:15:21.734 ***** 2026-02-19 05:58:58.330860 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-19 05:58:58.330871 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-19 05:58:58.330882 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-19 05:58:58.330898 | orchestrator | 2026-02-19 05:58:58.330915 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 05:58:58.330931 | orchestrator | Thursday 19 February 2026 05:58:37 +0000 (0:00:01.634) 0:15:23.368 ***** 2026-02-19 05:58:58.330948 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-19 05:58:58.330965 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-19 05:58:58.330982 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-19 05:58:58.331002 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:58.331018 | orchestrator | 2026-02-19 05:58:58.331080 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 05:58:58.331092 | orchestrator | Thursday 19 February 2026 05:58:38 +0000 (0:00:01.142) 0:15:24.511 ***** 2026-02-19 05:58:58.331102 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:58.331114 | orchestrator | 2026-02-19 05:58:58.331125 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-19 05:58:58.331136 | orchestrator | Thursday 19 February 2026 05:58:39 +0000 (0:00:01.117) 0:15:25.629 ***** 2026-02-19 05:58:58.331146 | 
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 05:58:58.331158 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 05:58:58.331168 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-19 05:58:58.331179 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 05:58:58.331190 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 05:58:58.331201 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 05:58:58.331242 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 05:58:58.331258 | orchestrator | 2026-02-19 05:58:58.331276 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-19 05:58:58.331293 | orchestrator | Thursday 19 February 2026 05:58:41 +0000 (0:00:01.816) 0:15:27.445 ***** 2026-02-19 05:58:58.331310 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 05:58:58.331338 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 05:58:58.331356 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-19 05:58:58.331374 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 05:58:58.331393 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 05:58:58.331405 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 05:58:58.331415 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 05:58:58.331425 | orchestrator | 2026-02-19 05:58:58.331434 | 
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-19 05:58:58.331443 | orchestrator | Thursday 19 February 2026 05:58:43 +0000 (0:00:02.160) 0:15:29.606 ***** 2026-02-19 05:58:58.331453 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:58.331462 | orchestrator | 2026-02-19 05:58:58.331472 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-19 05:58:58.331481 | orchestrator | Thursday 19 February 2026 05:58:44 +0000 (0:00:00.874) 0:15:30.480 ***** 2026-02-19 05:58:58.331491 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:58.331500 | orchestrator | 2026-02-19 05:58:58.331510 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-19 05:58:58.331519 | orchestrator | Thursday 19 February 2026 05:58:45 +0000 (0:00:00.875) 0:15:31.356 ***** 2026-02-19 05:58:58.331529 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:58.331538 | orchestrator | 2026-02-19 05:58:58.331548 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-19 05:58:58.331557 | orchestrator | Thursday 19 February 2026 05:58:45 +0000 (0:00:00.763) 0:15:32.119 ***** 2026-02-19 05:58:58.331566 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:58.331576 | orchestrator | 2026-02-19 05:58:58.331585 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-19 05:58:58.331595 | orchestrator | Thursday 19 February 2026 05:58:46 +0000 (0:00:00.882) 0:15:33.001 ***** 2026-02-19 05:58:58.331604 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:58.331614 | orchestrator | 2026-02-19 05:58:58.331630 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-19 05:58:58.331647 | orchestrator | Thursday 19 February 2026 05:58:47 +0000 (0:00:00.775) 0:15:33.776 ***** 
2026-02-19 05:58:58.331662 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-19 05:58:58.331678 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-19 05:58:58.331695 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-19 05:58:58.331711 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:58.331729 | orchestrator | 2026-02-19 05:58:58.331746 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-19 05:58:58.331827 | orchestrator | Thursday 19 February 2026 05:58:48 +0000 (0:00:01.333) 0:15:35.110 ***** 2026-02-19 05:58:58.331837 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-19 05:58:58.331847 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-19 05:58:58.331856 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-19 05:58:58.331866 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-19 05:58:58.331876 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-19 05:58:58.331894 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-19 05:58:58.331904 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:58:58.331914 | orchestrator | 2026-02-19 05:58:58.331923 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-19 05:58:58.331933 | orchestrator | Thursday 19 February 2026 05:58:50 +0000 (0:00:01.616) 0:15:36.726 ***** 2026-02-19 05:58:58.331942 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-02-19 05:58:58.331952 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-19 05:58:58.331962 | orchestrator | 2026-02-19 05:58:58.331971 | 
orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-19 05:58:58.331981 | orchestrator | Thursday 19 February 2026 05:58:53 +0000 (0:00:03.342) 0:15:40.069 ***** 2026-02-19 05:58:58.331993 | orchestrator | changed: [testbed-node-2] 2026-02-19 05:58:58.332010 | orchestrator | 2026-02-19 05:58:58.332050 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-19 05:58:58.332068 | orchestrator | Thursday 19 February 2026 05:58:56 +0000 (0:00:02.232) 0:15:42.302 ***** 2026-02-19 05:58:58.332085 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-19 05:58:58.332104 | orchestrator | 2026-02-19 05:58:58.332123 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-19 05:58:58.332140 | orchestrator | Thursday 19 February 2026 05:58:57 +0000 (0:00:01.101) 0:15:43.403 ***** 2026-02-19 05:58:58.332157 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-19 05:58:58.332167 | orchestrator | 2026-02-19 05:58:58.332177 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-19 05:58:58.332196 | orchestrator | Thursday 19 February 2026 05:58:58 +0000 (0:00:01.134) 0:15:44.538 ***** 2026-02-19 05:59:40.281763 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:59:40.281919 | orchestrator | 2026-02-19 05:59:40.281938 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-19 05:59:40.281952 | orchestrator | Thursday 19 February 2026 05:58:59 +0000 (0:00:01.530) 0:15:46.068 ***** 2026-02-19 05:59:40.281963 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.281992 | orchestrator | 2026-02-19 05:59:40.282004 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-02-19 05:59:40.282173 | orchestrator | Thursday 19 February 2026 05:59:00 +0000 (0:00:01.102) 0:15:47.170 ***** 2026-02-19 05:59:40.282200 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.282220 | orchestrator | 2026-02-19 05:59:40.282239 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-19 05:59:40.282256 | orchestrator | Thursday 19 February 2026 05:59:02 +0000 (0:00:01.109) 0:15:48.279 ***** 2026-02-19 05:59:40.282274 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.282293 | orchestrator | 2026-02-19 05:59:40.282312 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-19 05:59:40.282330 | orchestrator | Thursday 19 February 2026 05:59:03 +0000 (0:00:01.111) 0:15:49.391 ***** 2026-02-19 05:59:40.282347 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:59:40.282364 | orchestrator | 2026-02-19 05:59:40.282382 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-19 05:59:40.282433 | orchestrator | Thursday 19 February 2026 05:59:04 +0000 (0:00:01.547) 0:15:50.938 ***** 2026-02-19 05:59:40.282453 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.282471 | orchestrator | 2026-02-19 05:59:40.282492 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-19 05:59:40.282513 | orchestrator | Thursday 19 February 2026 05:59:05 +0000 (0:00:01.107) 0:15:52.046 ***** 2026-02-19 05:59:40.282531 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.282550 | orchestrator | 2026-02-19 05:59:40.282564 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-19 05:59:40.282601 | orchestrator | Thursday 19 February 2026 05:59:06 +0000 (0:00:01.156) 0:15:53.202 ***** 2026-02-19 05:59:40.282615 | orchestrator | ok: [testbed-node-2] 
2026-02-19 05:59:40.282627 | orchestrator | 2026-02-19 05:59:40.282639 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-19 05:59:40.282650 | orchestrator | Thursday 19 February 2026 05:59:08 +0000 (0:00:01.549) 0:15:54.751 ***** 2026-02-19 05:59:40.282661 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:59:40.282671 | orchestrator | 2026-02-19 05:59:40.282682 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-19 05:59:40.282693 | orchestrator | Thursday 19 February 2026 05:59:10 +0000 (0:00:01.580) 0:15:56.332 ***** 2026-02-19 05:59:40.282704 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.282714 | orchestrator | 2026-02-19 05:59:40.282725 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-19 05:59:40.282736 | orchestrator | Thursday 19 February 2026 05:59:10 +0000 (0:00:00.796) 0:15:57.128 ***** 2026-02-19 05:59:40.282747 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:59:40.282757 | orchestrator | 2026-02-19 05:59:40.282768 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-19 05:59:40.282779 | orchestrator | Thursday 19 February 2026 05:59:11 +0000 (0:00:00.764) 0:15:57.893 ***** 2026-02-19 05:59:40.282790 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.282800 | orchestrator | 2026-02-19 05:59:40.282811 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-19 05:59:40.282821 | orchestrator | Thursday 19 February 2026 05:59:12 +0000 (0:00:00.774) 0:15:58.667 ***** 2026-02-19 05:59:40.282832 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.282843 | orchestrator | 2026-02-19 05:59:40.282853 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-19 05:59:40.282864 | orchestrator | Thursday 19 
February 2026 05:59:13 +0000 (0:00:00.773) 0:15:59.441 ***** 2026-02-19 05:59:40.282875 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.282886 | orchestrator | 2026-02-19 05:59:40.282897 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-19 05:59:40.282908 | orchestrator | Thursday 19 February 2026 05:59:13 +0000 (0:00:00.746) 0:16:00.187 ***** 2026-02-19 05:59:40.282918 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.282929 | orchestrator | 2026-02-19 05:59:40.282939 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-19 05:59:40.282950 | orchestrator | Thursday 19 February 2026 05:59:14 +0000 (0:00:00.794) 0:16:00.982 ***** 2026-02-19 05:59:40.282961 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.282971 | orchestrator | 2026-02-19 05:59:40.282982 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-19 05:59:40.282993 | orchestrator | Thursday 19 February 2026 05:59:15 +0000 (0:00:00.785) 0:16:01.767 ***** 2026-02-19 05:59:40.283003 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:59:40.283014 | orchestrator | 2026-02-19 05:59:40.283024 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-19 05:59:40.283035 | orchestrator | Thursday 19 February 2026 05:59:16 +0000 (0:00:00.799) 0:16:02.567 ***** 2026-02-19 05:59:40.283076 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:59:40.283087 | orchestrator | 2026-02-19 05:59:40.283097 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-19 05:59:40.283108 | orchestrator | Thursday 19 February 2026 05:59:17 +0000 (0:00:00.812) 0:16:03.380 ***** 2026-02-19 05:59:40.283119 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:59:40.283129 | orchestrator | 2026-02-19 05:59:40.283140 | orchestrator | TASK 
[ceph-common : Include configure_repository.yml] ************************** 2026-02-19 05:59:40.283151 | orchestrator | Thursday 19 February 2026 05:59:17 +0000 (0:00:00.796) 0:16:04.177 ***** 2026-02-19 05:59:40.283161 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.283172 | orchestrator | 2026-02-19 05:59:40.283183 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-19 05:59:40.283203 | orchestrator | Thursday 19 February 2026 05:59:18 +0000 (0:00:00.774) 0:16:04.951 ***** 2026-02-19 05:59:40.283221 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.283240 | orchestrator | 2026-02-19 05:59:40.283258 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-19 05:59:40.283302 | orchestrator | Thursday 19 February 2026 05:59:19 +0000 (0:00:00.754) 0:16:05.707 ***** 2026-02-19 05:59:40.283320 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.283337 | orchestrator | 2026-02-19 05:59:40.283356 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-19 05:59:40.283375 | orchestrator | Thursday 19 February 2026 05:59:20 +0000 (0:00:00.781) 0:16:06.488 ***** 2026-02-19 05:59:40.283393 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.283413 | orchestrator | 2026-02-19 05:59:40.283473 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-19 05:59:40.283490 | orchestrator | Thursday 19 February 2026 05:59:21 +0000 (0:00:00.747) 0:16:07.236 ***** 2026-02-19 05:59:40.283510 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.283526 | orchestrator | 2026-02-19 05:59:40.283537 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-19 05:59:40.283548 | orchestrator | Thursday 19 February 2026 05:59:21 +0000 (0:00:00.762) 0:16:07.998 ***** 2026-02-19 
05:59:40.283559 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.283569 | orchestrator | 2026-02-19 05:59:40.283580 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-19 05:59:40.283591 | orchestrator | Thursday 19 February 2026 05:59:22 +0000 (0:00:00.764) 0:16:08.762 ***** 2026-02-19 05:59:40.283601 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.283612 | orchestrator | 2026-02-19 05:59:40.283623 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-19 05:59:40.283635 | orchestrator | Thursday 19 February 2026 05:59:23 +0000 (0:00:00.751) 0:16:09.514 ***** 2026-02-19 05:59:40.283645 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.283656 | orchestrator | 2026-02-19 05:59:40.283667 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-19 05:59:40.283682 | orchestrator | Thursday 19 February 2026 05:59:24 +0000 (0:00:00.758) 0:16:10.273 ***** 2026-02-19 05:59:40.283699 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.283717 | orchestrator | 2026-02-19 05:59:40.283735 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-19 05:59:40.283753 | orchestrator | Thursday 19 February 2026 05:59:24 +0000 (0:00:00.758) 0:16:11.032 ***** 2026-02-19 05:59:40.283768 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.283783 | orchestrator | 2026-02-19 05:59:40.283801 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-19 05:59:40.283818 | orchestrator | Thursday 19 February 2026 05:59:25 +0000 (0:00:00.760) 0:16:11.792 ***** 2026-02-19 05:59:40.283835 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.283852 | orchestrator | 2026-02-19 05:59:40.283868 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-19 05:59:40.283885 | orchestrator | Thursday 19 February 2026 05:59:26 +0000 (0:00:00.781) 0:16:12.574 ***** 2026-02-19 05:59:40.283901 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.283919 | orchestrator | 2026-02-19 05:59:40.283935 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-19 05:59:40.283951 | orchestrator | Thursday 19 February 2026 05:59:27 +0000 (0:00:00.781) 0:16:13.356 ***** 2026-02-19 05:59:40.283968 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:59:40.283985 | orchestrator | 2026-02-19 05:59:40.284000 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-19 05:59:40.284016 | orchestrator | Thursday 19 February 2026 05:59:28 +0000 (0:00:01.603) 0:16:14.959 ***** 2026-02-19 05:59:40.284031 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:59:40.284078 | orchestrator | 2026-02-19 05:59:40.284096 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-19 05:59:40.284127 | orchestrator | Thursday 19 February 2026 05:59:30 +0000 (0:00:02.126) 0:16:17.085 ***** 2026-02-19 05:59:40.284144 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-02-19 05:59:40.284161 | orchestrator | 2026-02-19 05:59:40.284178 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-19 05:59:40.284194 | orchestrator | Thursday 19 February 2026 05:59:31 +0000 (0:00:01.133) 0:16:18.219 ***** 2026-02-19 05:59:40.284211 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.284228 | orchestrator | 2026-02-19 05:59:40.284245 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-19 05:59:40.284262 | orchestrator | Thursday 19 February 2026 05:59:33 +0000 (0:00:01.130) 0:16:19.349 ***** 
2026-02-19 05:59:40.284279 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.284296 | orchestrator | 2026-02-19 05:59:40.284313 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-19 05:59:40.284329 | orchestrator | Thursday 19 February 2026 05:59:34 +0000 (0:00:01.115) 0:16:20.465 ***** 2026-02-19 05:59:40.284346 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-19 05:59:40.284364 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-19 05:59:40.284383 | orchestrator | 2026-02-19 05:59:40.284401 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-19 05:59:40.284420 | orchestrator | Thursday 19 February 2026 05:59:36 +0000 (0:00:01.888) 0:16:22.353 ***** 2026-02-19 05:59:40.284439 | orchestrator | ok: [testbed-node-2] 2026-02-19 05:59:40.284457 | orchestrator | 2026-02-19 05:59:40.284475 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-19 05:59:40.284486 | orchestrator | Thursday 19 February 2026 05:59:37 +0000 (0:00:01.475) 0:16:23.829 ***** 2026-02-19 05:59:40.284497 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.284508 | orchestrator | 2026-02-19 05:59:40.284519 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-19 05:59:40.284529 | orchestrator | Thursday 19 February 2026 05:59:38 +0000 (0:00:01.150) 0:16:24.980 ***** 2026-02-19 05:59:40.284540 | orchestrator | skipping: [testbed-node-2] 2026-02-19 05:59:40.284551 | orchestrator | 2026-02-19 05:59:40.284561 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-19 05:59:40.284572 | orchestrator | Thursday 19 February 2026 05:59:39 +0000 (0:00:00.762) 0:16:25.742 ***** 2026-02-19 05:59:40.284601 | orchestrator | 
skipping: [testbed-node-2] 2026-02-19 06:00:19.982728 | orchestrator | 2026-02-19 06:00:19.982845 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-19 06:00:19.982862 | orchestrator | Thursday 19 February 2026 05:59:40 +0000 (0:00:00.754) 0:16:26.497 ***** 2026-02-19 06:00:19.982873 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-02-19 06:00:19.982884 | orchestrator | 2026-02-19 06:00:19.982910 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-19 06:00:19.982920 | orchestrator | Thursday 19 February 2026 05:59:41 +0000 (0:00:01.132) 0:16:27.630 ***** 2026-02-19 06:00:19.982930 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:00:19.982942 | orchestrator | 2026-02-19 06:00:19.982952 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-19 06:00:19.982962 | orchestrator | Thursday 19 February 2026 05:59:43 +0000 (0:00:01.767) 0:16:29.397 ***** 2026-02-19 06:00:19.982972 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 06:00:19.982982 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 06:00:19.982991 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 06:00:19.983001 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983012 | orchestrator | 2026-02-19 06:00:19.983022 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-19 06:00:19.983135 | orchestrator | Thursday 19 February 2026 05:59:44 +0000 (0:00:01.122) 0:16:30.520 ***** 2026-02-19 06:00:19.983150 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983160 | orchestrator | 2026-02-19 06:00:19.983173 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-02-19 06:00:19.983195 | orchestrator | Thursday 19 February 2026 05:59:45 +0000 (0:00:01.157) 0:16:31.677 ***** 2026-02-19 06:00:19.983220 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983238 | orchestrator | 2026-02-19 06:00:19.983255 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-19 06:00:19.983271 | orchestrator | Thursday 19 February 2026 05:59:46 +0000 (0:00:01.157) 0:16:32.835 ***** 2026-02-19 06:00:19.983288 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983306 | orchestrator | 2026-02-19 06:00:19.983325 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-19 06:00:19.983342 | orchestrator | Thursday 19 February 2026 05:59:47 +0000 (0:00:01.123) 0:16:33.959 ***** 2026-02-19 06:00:19.983360 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983379 | orchestrator | 2026-02-19 06:00:19.983398 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-19 06:00:19.983412 | orchestrator | Thursday 19 February 2026 05:59:48 +0000 (0:00:01.125) 0:16:35.084 ***** 2026-02-19 06:00:19.983425 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983436 | orchestrator | 2026-02-19 06:00:19.983446 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-19 06:00:19.983456 | orchestrator | Thursday 19 February 2026 05:59:49 +0000 (0:00:00.773) 0:16:35.858 ***** 2026-02-19 06:00:19.983465 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:00:19.983476 | orchestrator | 2026-02-19 06:00:19.983485 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-19 06:00:19.983495 | orchestrator | Thursday 19 February 2026 05:59:51 +0000 (0:00:02.307) 0:16:38.166 ***** 2026-02-19 06:00:19.983505 | orchestrator | ok: 
[testbed-node-2] 2026-02-19 06:00:19.983514 | orchestrator | 2026-02-19 06:00:19.983524 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-19 06:00:19.983533 | orchestrator | Thursday 19 February 2026 05:59:52 +0000 (0:00:00.785) 0:16:38.951 ***** 2026-02-19 06:00:19.983543 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-02-19 06:00:19.983553 | orchestrator | 2026-02-19 06:00:19.983562 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-19 06:00:19.983572 | orchestrator | Thursday 19 February 2026 05:59:53 +0000 (0:00:01.094) 0:16:40.046 ***** 2026-02-19 06:00:19.983581 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983591 | orchestrator | 2026-02-19 06:00:19.983601 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-19 06:00:19.983610 | orchestrator | Thursday 19 February 2026 05:59:54 +0000 (0:00:01.124) 0:16:41.171 ***** 2026-02-19 06:00:19.983620 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983630 | orchestrator | 2026-02-19 06:00:19.983639 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-19 06:00:19.983649 | orchestrator | Thursday 19 February 2026 05:59:56 +0000 (0:00:01.167) 0:16:42.339 ***** 2026-02-19 06:00:19.983658 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983668 | orchestrator | 2026-02-19 06:00:19.983678 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-19 06:00:19.983687 | orchestrator | Thursday 19 February 2026 05:59:57 +0000 (0:00:01.128) 0:16:43.468 ***** 2026-02-19 06:00:19.983697 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983706 | orchestrator | 2026-02-19 06:00:19.983716 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-02-19 06:00:19.983725 | orchestrator | Thursday 19 February 2026 05:59:58 +0000 (0:00:01.140) 0:16:44.608 ***** 2026-02-19 06:00:19.983735 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983756 | orchestrator | 2026-02-19 06:00:19.983766 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-19 06:00:19.983775 | orchestrator | Thursday 19 February 2026 05:59:59 +0000 (0:00:01.126) 0:16:45.735 ***** 2026-02-19 06:00:19.983785 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983794 | orchestrator | 2026-02-19 06:00:19.983804 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-19 06:00:19.983813 | orchestrator | Thursday 19 February 2026 06:00:00 +0000 (0:00:01.137) 0:16:46.873 ***** 2026-02-19 06:00:19.983823 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983832 | orchestrator | 2026-02-19 06:00:19.983842 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-19 06:00:19.983872 | orchestrator | Thursday 19 February 2026 06:00:01 +0000 (0:00:01.196) 0:16:48.069 ***** 2026-02-19 06:00:19.983882 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.983892 | orchestrator | 2026-02-19 06:00:19.983929 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-19 06:00:19.983967 | orchestrator | Thursday 19 February 2026 06:00:02 +0000 (0:00:01.154) 0:16:49.224 ***** 2026-02-19 06:00:19.983984 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:00:19.984000 | orchestrator | 2026-02-19 06:00:19.984026 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-19 06:00:19.984041 | orchestrator | Thursday 19 February 2026 06:00:03 +0000 (0:00:00.802) 0:16:50.027 ***** 2026-02-19 06:00:19.984081 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-02-19 06:00:19.984099 | orchestrator | 2026-02-19 06:00:19.984113 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-19 06:00:19.984130 | orchestrator | Thursday 19 February 2026 06:00:04 +0000 (0:00:01.092) 0:16:51.120 ***** 2026-02-19 06:00:19.984146 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-02-19 06:00:19.984163 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-19 06:00:19.984181 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-19 06:00:19.984199 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-19 06:00:19.984214 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-19 06:00:19.984230 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-19 06:00:19.984241 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-19 06:00:19.984250 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-19 06:00:19.984260 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-19 06:00:19.984269 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-19 06:00:19.984279 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-19 06:00:19.984288 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-19 06:00:19.984298 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-19 06:00:19.984307 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-19 06:00:19.984317 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-02-19 06:00:19.984326 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-02-19 06:00:19.984336 | orchestrator | 2026-02-19 06:00:19.984346 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-19 06:00:19.984356 | orchestrator | Thursday 19 February 2026 06:00:11 +0000 (0:00:06.557) 0:16:57.677 ***** 2026-02-19 06:00:19.984365 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.984375 | orchestrator | 2026-02-19 06:00:19.984385 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-19 06:00:19.984394 | orchestrator | Thursday 19 February 2026 06:00:12 +0000 (0:00:00.761) 0:16:58.438 ***** 2026-02-19 06:00:19.984404 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.984413 | orchestrator | 2026-02-19 06:00:19.984423 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-19 06:00:19.984443 | orchestrator | Thursday 19 February 2026 06:00:12 +0000 (0:00:00.786) 0:16:59.225 ***** 2026-02-19 06:00:19.984453 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.984462 | orchestrator | 2026-02-19 06:00:19.984472 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-19 06:00:19.984481 | orchestrator | Thursday 19 February 2026 06:00:13 +0000 (0:00:00.768) 0:16:59.994 ***** 2026-02-19 06:00:19.984505 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.984528 | orchestrator | 2026-02-19 06:00:19.984547 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-19 06:00:19.984563 | orchestrator | Thursday 19 February 2026 06:00:14 +0000 (0:00:00.809) 0:17:00.803 ***** 2026-02-19 06:00:19.984578 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.984594 | orchestrator | 2026-02-19 06:00:19.984609 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-19 06:00:19.984623 | orchestrator | Thursday 19 February 2026 06:00:15 +0000 (0:00:00.771) 0:17:01.575 ***** 2026-02-19 
06:00:19.984638 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.984656 | orchestrator | 2026-02-19 06:00:19.984671 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-19 06:00:19.984689 | orchestrator | Thursday 19 February 2026 06:00:16 +0000 (0:00:00.756) 0:17:02.331 ***** 2026-02-19 06:00:19.984707 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.984724 | orchestrator | 2026-02-19 06:00:19.984736 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-19 06:00:19.984746 | orchestrator | Thursday 19 February 2026 06:00:16 +0000 (0:00:00.771) 0:17:03.103 ***** 2026-02-19 06:00:19.984755 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.984765 | orchestrator | 2026-02-19 06:00:19.984775 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-19 06:00:19.984784 | orchestrator | Thursday 19 February 2026 06:00:17 +0000 (0:00:00.774) 0:17:03.878 ***** 2026-02-19 06:00:19.984798 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.984814 | orchestrator | 2026-02-19 06:00:19.984828 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-19 06:00:19.984843 | orchestrator | Thursday 19 February 2026 06:00:18 +0000 (0:00:00.796) 0:17:04.674 ***** 2026-02-19 06:00:19.984857 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.984872 | orchestrator | 2026-02-19 06:00:19.984886 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-19 06:00:19.984901 | orchestrator | Thursday 19 February 2026 06:00:19 +0000 (0:00:00.754) 0:17:05.429 ***** 2026-02-19 06:00:19.984915 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:00:19.984929 | orchestrator | 2026-02-19 
06:00:19.984960 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-19 06:01:06.202440 | orchestrator | Thursday 19 February 2026 06:00:19 +0000 (0:00:00.766) 0:17:06.195 ***** 2026-02-19 06:01:06.202560 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.202576 | orchestrator | 2026-02-19 06:01:06.202590 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-19 06:01:06.202618 | orchestrator | Thursday 19 February 2026 06:00:20 +0000 (0:00:00.749) 0:17:06.945 ***** 2026-02-19 06:01:06.202630 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.202641 | orchestrator | 2026-02-19 06:01:06.202653 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-19 06:01:06.202664 | orchestrator | Thursday 19 February 2026 06:00:21 +0000 (0:00:00.839) 0:17:07.784 ***** 2026-02-19 06:01:06.202675 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.202685 | orchestrator | 2026-02-19 06:01:06.202696 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-19 06:01:06.202707 | orchestrator | Thursday 19 February 2026 06:00:22 +0000 (0:00:00.764) 0:17:08.549 ***** 2026-02-19 06:01:06.202718 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.202752 | orchestrator | 2026-02-19 06:01:06.202764 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-19 06:01:06.202775 | orchestrator | Thursday 19 February 2026 06:00:23 +0000 (0:00:00.843) 0:17:09.393 ***** 2026-02-19 06:01:06.202785 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.202796 | orchestrator | 2026-02-19 06:01:06.202807 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-19 06:01:06.202817 | orchestrator | Thursday 19 February 2026 06:00:23 +0000 (0:00:00.761) 
0:17:10.155 ***** 2026-02-19 06:01:06.202828 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.202839 | orchestrator | 2026-02-19 06:01:06.202851 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-19 06:01:06.202864 | orchestrator | Thursday 19 February 2026 06:00:24 +0000 (0:00:00.764) 0:17:10.919 ***** 2026-02-19 06:01:06.202875 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.202886 | orchestrator | 2026-02-19 06:01:06.202897 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-19 06:01:06.202908 | orchestrator | Thursday 19 February 2026 06:00:25 +0000 (0:00:00.784) 0:17:11.704 ***** 2026-02-19 06:01:06.202918 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.202929 | orchestrator | 2026-02-19 06:01:06.202940 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-19 06:01:06.202950 | orchestrator | Thursday 19 February 2026 06:00:26 +0000 (0:00:00.788) 0:17:12.493 ***** 2026-02-19 06:01:06.202961 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.202972 | orchestrator | 2026-02-19 06:01:06.202984 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-19 06:01:06.202997 | orchestrator | Thursday 19 February 2026 06:00:27 +0000 (0:00:00.766) 0:17:13.260 ***** 2026-02-19 06:01:06.203010 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.203022 | orchestrator | 2026-02-19 06:01:06.203035 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-19 06:01:06.203052 | orchestrator | Thursday 19 February 2026 06:00:27 +0000 (0:00:00.774) 0:17:14.035 ***** 2026-02-19 06:01:06.203071 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-19 06:01:06.203122 | orchestrator | 
skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-19 06:01:06.203141 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-19 06:01:06.203161 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.203180 | orchestrator | 2026-02-19 06:01:06.203193 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-19 06:01:06.203204 | orchestrator | Thursday 19 February 2026 06:00:28 +0000 (0:00:01.044) 0:17:15.079 ***** 2026-02-19 06:01:06.203214 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-19 06:01:06.203225 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-19 06:01:06.203236 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-19 06:01:06.203246 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.203257 | orchestrator | 2026-02-19 06:01:06.203268 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 06:01:06.203279 | orchestrator | Thursday 19 February 2026 06:00:29 +0000 (0:00:01.032) 0:17:16.112 ***** 2026-02-19 06:01:06.203290 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-19 06:01:06.203300 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-19 06:01:06.203311 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-19 06:01:06.203321 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.203332 | orchestrator | 2026-02-19 06:01:06.203342 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 06:01:06.203353 | orchestrator | Thursday 19 February 2026 06:00:30 +0000 (0:00:01.018) 0:17:17.131 ***** 2026-02-19 06:01:06.203364 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:01:06.203383 | orchestrator | 2026-02-19 06:01:06.203394 | orchestrator | TASK [ceph-facts : Set_fact 
rgw_instances] *************************************
2026-02-19 06:01:06.203405 | orchestrator | Thursday 19 February 2026 06:00:31 +0000 (0:00:00.799) 0:17:17.930 *****
2026-02-19 06:01:06.203416 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-19 06:01:06.203427 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:01:06.203438 | orchestrator |
2026-02-19 06:01:06.203449 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-19 06:01:06.203460 | orchestrator | Thursday 19 February 2026 06:00:32 +0000 (0:00:00.877) 0:17:18.808 *****
2026-02-19 06:01:06.203470 | orchestrator | changed: [testbed-node-2]
2026-02-19 06:01:06.203481 | orchestrator |
2026-02-19 06:01:06.203492 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-19 06:01:06.203502 | orchestrator | Thursday 19 February 2026 06:00:34 +0000 (0:00:01.432) 0:17:20.240 *****
2026-02-19 06:01:06.203513 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:01:06.203524 | orchestrator |
2026-02-19 06:01:06.203539 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-19 06:01:06.203579 | orchestrator | Thursday 19 February 2026 06:00:34 +0000 (0:00:00.786) 0:17:21.027 *****
2026-02-19 06:01:06.203600 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2
2026-02-19 06:01:06.203613 | orchestrator |
2026-02-19 06:01:06.203624 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-19 06:01:06.203643 | orchestrator | Thursday 19 February 2026 06:00:35 +0000 (0:00:01.199) 0:17:22.226 *****
2026-02-19 06:01:06.203654 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:01:06.203664 | orchestrator |
2026-02-19 06:01:06.203675 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-19 06:01:06.203686 | orchestrator | Thursday 19 February 2026 06:00:39 +0000 (0:00:03.169) 0:17:25.396 *****
2026-02-19 06:01:06.203697 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:01:06.203708 | orchestrator |
2026-02-19 06:01:06.203718 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-19 06:01:06.203729 | orchestrator | Thursday 19 February 2026 06:00:40 +0000 (0:00:01.135) 0:17:26.532 *****
2026-02-19 06:01:06.203740 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:01:06.203751 | orchestrator |
2026-02-19 06:01:06.203761 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-19 06:01:06.203772 | orchestrator | Thursday 19 February 2026 06:00:41 +0000 (0:00:01.151) 0:17:27.683 *****
2026-02-19 06:01:06.203783 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:01:06.203793 | orchestrator |
2026-02-19 06:01:06.203804 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-19 06:01:06.203815 | orchestrator | Thursday 19 February 2026 06:00:42 +0000 (0:00:01.127) 0:17:28.810 *****
2026-02-19 06:01:06.203825 | orchestrator | changed: [testbed-node-2]
2026-02-19 06:01:06.203836 | orchestrator |
2026-02-19 06:01:06.203847 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-19 06:01:06.203857 | orchestrator | Thursday 19 February 2026 06:00:44 +0000 (0:00:02.124) 0:17:30.935 *****
2026-02-19 06:01:06.203868 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:01:06.203878 | orchestrator |
2026-02-19 06:01:06.203889 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-19 06:01:06.203900 | orchestrator | Thursday 19 February 2026 06:00:46 +0000 (0:00:01.628) 0:17:32.564 *****
2026-02-19 06:01:06.203910 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:01:06.203921 | orchestrator |
2026-02-19 06:01:06.203932 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-19 06:01:06.203943 | orchestrator | Thursday 19 February 2026 06:00:47 +0000 (0:00:01.468) 0:17:34.032 *****
2026-02-19 06:01:06.203953 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:01:06.203964 | orchestrator |
2026-02-19 06:01:06.203975 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-19 06:01:06.203986 | orchestrator | Thursday 19 February 2026 06:00:49 +0000 (0:00:01.454) 0:17:35.487 *****
2026-02-19 06:01:06.204005 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-19 06:01:06.204016 | orchestrator |
2026-02-19 06:01:06.204026 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-19 06:01:06.204037 | orchestrator | Thursday 19 February 2026 06:00:50 +0000 (0:00:01.619) 0:17:37.107 *****
2026-02-19 06:01:06.204048 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-19 06:01:06.204058 | orchestrator |
2026-02-19 06:01:06.204069 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-19 06:01:06.204112 | orchestrator | Thursday 19 February 2026 06:00:52 +0000 (0:00:01.543) 0:17:38.650 *****
2026-02-19 06:01:06.204130 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 06:01:06.204158 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-19 06:01:06.204176 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-19 06:01:06.204193 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-02-19 06:01:06.204211 | orchestrator |
2026-02-19 06:01:06.204227 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-19 06:01:06.204243 | orchestrator | Thursday 19 February 2026 06:00:56 +0000 (0:00:04.342) 0:17:42.993 *****
2026-02-19 06:01:06.204260 | orchestrator | changed: [testbed-node-2]
2026-02-19 06:01:06.204279 | orchestrator |
2026-02-19 06:01:06.204297 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-19 06:01:06.204314 | orchestrator | Thursday 19 February 2026 06:00:58 +0000 (0:00:02.007) 0:17:45.001 *****
2026-02-19 06:01:06.204331 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:01:06.204350 | orchestrator |
2026-02-19 06:01:06.204367 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-19 06:01:06.204383 | orchestrator | Thursday 19 February 2026 06:00:59 +0000 (0:00:01.145) 0:17:46.147 *****
2026-02-19 06:01:06.204401 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:01:06.204419 | orchestrator |
2026-02-19 06:01:06.204437 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-19 06:01:06.204455 | orchestrator | Thursday 19 February 2026 06:01:01 +0000 (0:00:01.127) 0:17:47.274 *****
2026-02-19 06:01:06.204474 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:01:06.204493 | orchestrator |
2026-02-19 06:01:06.204513 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-19 06:01:06.204526 | orchestrator | Thursday 19 February 2026 06:01:02 +0000 (0:00:01.743) 0:17:49.017 *****
2026-02-19 06:01:06.204537 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:01:06.204547 | orchestrator |
2026-02-19 06:01:06.204558 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-19 06:01:06.204568 | orchestrator | Thursday 19 February 2026 06:01:04 +0000 (0:00:01.449) 0:17:50.467 *****
2026-02-19 06:01:06.204579 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:01:06.204590 | orchestrator |
2026-02-19 06:01:06.204600 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-19 06:01:06.204611 | orchestrator | Thursday 19 February 2026 06:01:05 +0000 (0:00:00.783) 0:17:51.250 *****
2026-02-19 06:01:06.204622 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2
2026-02-19 06:01:06.204633 | orchestrator |
2026-02-19 06:01:06.204656 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-19 06:02:13.109403 | orchestrator | Thursday 19 February 2026 06:01:06 +0000 (0:00:01.164) 0:17:52.414 *****
2026-02-19 06:02:13.109499 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:02:13.109509 | orchestrator |
2026-02-19 06:02:13.109516 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-19 06:02:13.109536 | orchestrator | Thursday 19 February 2026 06:01:07 +0000 (0:00:01.081) 0:17:53.496 *****
2026-02-19 06:02:13.109543 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:02:13.109549 | orchestrator |
2026-02-19 06:02:13.109577 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-19 06:02:13.109608 | orchestrator | Thursday 19 February 2026 06:01:08 +0000 (0:00:01.149) 0:17:54.646 *****
2026-02-19 06:02:13.109619 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2
2026-02-19 06:02:13.109629 | orchestrator |
2026-02-19 06:02:13.109640 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-19 06:02:13.109650 | orchestrator | Thursday 19 February 2026 06:01:09 +0000 (0:00:01.113) 0:17:55.760 *****
2026-02-19 06:02:13.109660 | orchestrator | changed: [testbed-node-2]
2026-02-19 06:02:13.109671 | orchestrator |
2026-02-19 06:02:13.109681 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-19 06:02:13.109692 | orchestrator | Thursday 19 February 2026 06:01:12 +0000 (0:00:02.633) 0:17:58.393 *****
2026-02-19 06:02:13.109702 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:02:13.109713 | orchestrator |
2026-02-19 06:02:13.109724 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-19 06:02:13.109734 | orchestrator | Thursday 19 February 2026 06:01:14 +0000 (0:00:01.910) 0:18:00.303 *****
2026-02-19 06:02:13.109745 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:02:13.109753 | orchestrator |
2026-02-19 06:02:13.109759 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-19 06:02:13.109765 | orchestrator | Thursday 19 February 2026 06:01:16 +0000 (0:00:02.504) 0:18:02.807 *****
2026-02-19 06:02:13.109771 | orchestrator | changed: [testbed-node-2]
2026-02-19 06:02:13.109777 | orchestrator |
2026-02-19 06:02:13.109782 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-19 06:02:13.109788 | orchestrator | Thursday 19 February 2026 06:01:19 +0000 (0:00:02.995) 0:18:05.803 *****
2026-02-19 06:02:13.109794 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2
2026-02-19 06:02:13.109800 | orchestrator |
2026-02-19 06:02:13.109806 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-19 06:02:13.109812 | orchestrator | Thursday 19 February 2026 06:01:20 +0000 (0:00:01.133) 0:18:06.936 *****
2026-02-19 06:02:13.109818 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-19 06:02:13.109824 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:02:13.109830 | orchestrator |
2026-02-19 06:02:13.109836 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-19 06:02:13.109842 | orchestrator | Thursday 19 February 2026 06:01:43 +0000 (0:00:22.898) 0:18:29.835 *****
2026-02-19 06:02:13.109848 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:02:13.109853 | orchestrator |
2026-02-19 06:02:13.109859 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-19 06:02:13.109868 | orchestrator | Thursday 19 February 2026 06:01:46 +0000 (0:00:02.689) 0:18:32.525 *****
2026-02-19 06:02:13.109877 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:02:13.109887 | orchestrator |
2026-02-19 06:02:13.109896 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-19 06:02:13.109905 | orchestrator | Thursday 19 February 2026 06:01:47 +0000 (0:00:00.776) 0:18:33.302 *****
2026-02-19 06:02:13.109918 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-19 06:02:13.109930 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-19 06:02:13.109940 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-19 06:02:13.109960 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-19 06:02:13.109999 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-19 06:02:13.110064 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae'}])
2026-02-19 06:02:13.110075 | orchestrator |
2026-02-19 06:02:13.110082 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-02-19 06:02:13.110089 | orchestrator | Thursday 19 February 2026 06:01:56 +0000 (0:00:09.746) 0:18:43.048 *****
2026-02-19 06:02:13.110096 | orchestrator | changed: [testbed-node-2]
2026-02-19 06:02:13.110103 | orchestrator |
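The "Set cluster configs" task above loops over (section, option) pairs and visibly skips the one item whose value is an Ansible `omit` placeholder (`__omit_place_holder__…`). A minimal sketch of that filtering behaviour, using the values from the log; the function name `build_config_commands` is illustrative, not part of ceph-ansible:

```python
# Sketch (not the ceph-ansible source): each (section, option) pair becomes
# a `ceph config set` invocation, and values carrying the Ansible omit
# placeholder sentinel are skipped, as the log's "skipping" item shows.
OMIT_PREFIX = "__omit_place_holder__"

def build_config_commands(section, options):
    """Return `ceph config set` command lines, dropping omitted values."""
    commands = []
    for key, value in options.items():
        if isinstance(value, str) and value.startswith(OMIT_PREFIX):
            continue  # mirrors the "skipping" item in the log above
        commands.append(f"ceph config set {section} {key} {value}")
    return commands

# Values taken from the "Set cluster configs" items in the log.
global_opts = {
    "public_network": "192.168.16.0/20",
    "cluster_network": "192.168.16.0/20",
    "osd_pool_default_crush_rule": -1,
    "ms_bind_ipv6": "False",
    "ms_bind_ipv4": "True",
    "osd_crush_chooseleaf_type": "__omit_place_holder__287a5286bfb3b3a67f1e3f0d4602fb8fabfb18ae",
}
cmds = build_config_commands("global", global_opts)
```

Five of the six options produce a command; `osd_crush_chooseleaf_type` is dropped, matching the five `ok:` items and one `skipping:` item in the task output.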
2026-02-19 06:02:13.110110 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-19 06:02:13.110116 | orchestrator | Thursday 19 February 2026 06:01:58 +0000 (0:00:02.175) 0:18:45.223 *****
2026-02-19 06:02:13.110144 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:02:13.110150 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1)
2026-02-19 06:02:13.110155 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2)
2026-02-19 06:02:13.110161 | orchestrator |
2026-02-19 06:02:13.110167 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-19 06:02:13.110173 | orchestrator | Thursday 19 February 2026 06:02:00 +0000 (0:00:01.821) 0:18:47.045 *****
2026-02-19 06:02:13.110179 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-19 06:02:13.110185 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-19 06:02:13.110190 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-19 06:02:13.110196 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:02:13.110202 | orchestrator |
2026-02-19 06:02:13.110208 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-02-19 06:02:13.110214 | orchestrator | Thursday 19 February 2026 06:02:02 +0000 (0:00:01.381) 0:18:48.426 *****
2026-02-19 06:02:13.110219 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:02:13.110232 | orchestrator |
2026-02-19 06:02:13.110238 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-02-19 06:02:13.110244 | orchestrator | Thursday 19 February 2026 06:02:02 +0000 (0:00:00.768) 0:18:49.195 *****
2026-02-19 06:02:13.110250 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:02:13.110256 | orchestrator |
2026-02-19 06:02:13.110261 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-19 06:02:13.110267 | orchestrator | Thursday 19 February 2026 06:02:04 +0000 (0:00:01.951) 0:18:51.147 *****
2026-02-19 06:02:13.110273 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:02:13.110279 | orchestrator |
2026-02-19 06:02:13.110290 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-19 06:02:13.110296 | orchestrator | Thursday 19 February 2026 06:02:05 +0000 (0:00:00.792) 0:18:51.939 *****
2026-02-19 06:02:13.110302 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:02:13.110307 | orchestrator |
2026-02-19 06:02:13.110313 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-19 06:02:13.110319 | orchestrator | Thursday 19 February 2026 06:02:06 +0000 (0:00:00.763) 0:18:52.702 *****
2026-02-19 06:02:13.110325 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:02:13.110330 | orchestrator |
2026-02-19 06:02:13.110336 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-19 06:02:13.110342 | orchestrator | Thursday 19 February 2026 06:02:07 +0000 (0:00:00.786) 0:18:53.489 *****
2026-02-19 06:02:13.110348 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:02:13.110354 | orchestrator |
2026-02-19 06:02:13.110359 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-19 06:02:13.110365 | orchestrator | Thursday 19 February 2026 06:02:08 +0000 (0:00:00.783) 0:18:54.273 *****
2026-02-19 06:02:13.110371 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:02:13.110376 | orchestrator |
2026-02-19 06:02:13.110382 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-19 06:02:13.110390 | orchestrator | Thursday 19 February 2026 06:02:08 +0000 (0:00:00.807) 0:18:55.081 *****
2026-02-19 06:02:13.110399 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:02:13.110407 | orchestrator |
2026-02-19 06:02:13.110420 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-19 06:02:13.110433 | orchestrator | Thursday 19 February 2026 06:02:09 +0000 (0:00:00.748) 0:18:55.829 *****
2026-02-19 06:02:13.110442 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:02:13.110452 | orchestrator |
2026-02-19 06:02:13.110460 | orchestrator | PLAY [Reset mon_host] **********************************************************
2026-02-19 06:02:13.110470 | orchestrator |
2026-02-19 06:02:13.110479 | orchestrator | TASK [Reset mon_host fact] *****************************************************
2026-02-19 06:02:13.110488 | orchestrator | Thursday 19 February 2026 06:02:11 +0000 (0:00:01.778) 0:18:57.607 *****
2026-02-19 06:02:13.110497 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:02:13.110506 | orchestrator | ok: [testbed-node-1]
2026-02-19 06:02:13.110516 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:02:13.110526 | orchestrator |
2026-02-19 06:02:13.110535 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-19 06:02:13.110545 | orchestrator |
2026-02-19 06:02:13.110555 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-19 06:02:13.110573 | orchestrator | Thursday 19 February 2026 06:02:13 +0000 (0:00:01.709) 0:18:59.317 *****
2026-02-19 06:02:57.048833 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.048954 | orchestrator |
2026-02-19 06:02:57.048978 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-19 06:02:57.048997 | orchestrator | Thursday 19 February 2026 06:02:14 +0000 (0:00:01.115) 0:19:00.432 *****
2026-02-19 06:02:57.049029 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049039 | orchestrator |
2026-02-19 06:02:57.049049 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 06:02:57.049058 | orchestrator | Thursday 19 February 2026 06:02:15 +0000 (0:00:01.124) 0:19:01.557 *****
2026-02-19 06:02:57.049067 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049075 | orchestrator |
2026-02-19 06:02:57.049084 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 06:02:57.049093 | orchestrator | Thursday 19 February 2026 06:02:16 +0000 (0:00:01.130) 0:19:02.688 *****
2026-02-19 06:02:57.049101 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049110 | orchestrator |
2026-02-19 06:02:57.049119 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 06:02:57.049127 | orchestrator | Thursday 19 February 2026 06:02:17 +0000 (0:00:01.120) 0:19:03.808 *****
2026-02-19 06:02:57.049247 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049260 | orchestrator |
2026-02-19 06:02:57.049269 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 06:02:57.049278 | orchestrator | Thursday 19 February 2026 06:02:18 +0000 (0:00:01.096) 0:19:04.905 *****
2026-02-19 06:02:57.049286 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049295 | orchestrator |
2026-02-19 06:02:57.049304 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 06:02:57.049312 | orchestrator | Thursday 19 February 2026 06:02:19 +0000 (0:00:01.095) 0:19:06.000 *****
2026-02-19 06:02:57.049321 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049330 | orchestrator |
2026-02-19 06:02:57.049338 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 06:02:57.049346 | orchestrator | Thursday 19 February 2026 06:02:20 +0000 (0:00:01.122) 0:19:07.123 *****
2026-02-19 06:02:57.049355 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049364 | orchestrator |
2026-02-19 06:02:57.049374 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 06:02:57.049385 | orchestrator | Thursday 19 February 2026 06:02:22 +0000 (0:00:01.111) 0:19:08.234 *****
2026-02-19 06:02:57.049395 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049406 | orchestrator |
2026-02-19 06:02:57.049416 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 06:02:57.049426 | orchestrator | Thursday 19 February 2026 06:02:23 +0000 (0:00:01.117) 0:19:09.352 *****
2026-02-19 06:02:57.049437 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049447 | orchestrator |
2026-02-19 06:02:57.049457 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 06:02:57.049467 | orchestrator | Thursday 19 February 2026 06:02:24 +0000 (0:00:01.092) 0:19:10.444 *****
2026-02-19 06:02:57.049478 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049488 | orchestrator |
2026-02-19 06:02:57.049498 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 06:02:57.049508 | orchestrator | Thursday 19 February 2026 06:02:25 +0000 (0:00:01.104) 0:19:11.549 *****
2026-02-19 06:02:57.049518 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049528 | orchestrator |
2026-02-19 06:02:57.049538 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-19 06:02:57.049549 | orchestrator | Thursday 19 February 2026 06:02:26 +0000 (0:00:01.060) 0:19:12.610 *****
2026-02-19 06:02:57.049559 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049569 | orchestrator |
2026-02-19 06:02:57.049579 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-19 06:02:57.049589 | orchestrator | Thursday 19 February 2026 06:02:27 +0000 (0:00:00.897) 0:19:13.507 *****
2026-02-19 06:02:57.049599 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049609 | orchestrator |
2026-02-19 06:02:57.049619 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-19 06:02:57.049630 | orchestrator | Thursday 19 February 2026 06:02:28 +0000 (0:00:01.102) 0:19:14.611 *****
2026-02-19 06:02:57.049640 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049651 | orchestrator |
2026-02-19 06:02:57.049666 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-19 06:02:57.049681 | orchestrator | Thursday 19 February 2026 06:02:29 +0000 (0:00:01.072) 0:19:15.683 *****
2026-02-19 06:02:57.049700 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049720 | orchestrator |
2026-02-19 06:02:57.049734 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-19 06:02:57.049748 | orchestrator | Thursday 19 February 2026 06:02:30 +0000 (0:00:01.120) 0:19:16.804 *****
2026-02-19 06:02:57.049762 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049775 | orchestrator |
2026-02-19 06:02:57.049789 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-19 06:02:57.049804 | orchestrator | Thursday 19 February 2026 06:02:31 +0000 (0:00:01.085) 0:19:17.889 *****
2026-02-19 06:02:57.049831 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049845 | orchestrator |
2026-02-19 06:02:57.049861 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-19 06:02:57.049870 | orchestrator | Thursday 19 February 2026 06:02:32 +0000 (0:00:00.985) 0:19:18.875 *****
2026-02-19 06:02:57.049879 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049887 | orchestrator |
2026-02-19 06:02:57.049896 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-19 06:02:57.049906 | orchestrator | Thursday 19 February 2026 06:02:33 +0000 (0:00:00.896) 0:19:19.772 *****
2026-02-19 06:02:57.049914 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049923 | orchestrator |
2026-02-19 06:02:57.049931 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-19 06:02:57.049940 | orchestrator | Thursday 19 February 2026 06:02:34 +0000 (0:00:00.908) 0:19:20.680 *****
2026-02-19 06:02:57.049948 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.049957 | orchestrator |
2026-02-19 06:02:57.049984 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-19 06:02:57.049993 | orchestrator | Thursday 19 February 2026 06:02:35 +0000 (0:00:00.918) 0:19:21.598 *****
2026-02-19 06:02:57.050002 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050011 | orchestrator |
2026-02-19 06:02:57.050084 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-19 06:02:57.050093 | orchestrator | Thursday 19 February 2026 06:02:36 +0000 (0:00:00.897) 0:19:22.496 *****
2026-02-19 06:02:57.050102 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050111 | orchestrator |
2026-02-19 06:02:57.050120 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-19 06:02:57.050128 | orchestrator | Thursday 19 February 2026 06:02:37 +0000 (0:00:00.906) 0:19:23.403 *****
2026-02-19 06:02:57.050137 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050146 | orchestrator |
2026-02-19 06:02:57.050178 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-19 06:02:57.050189 | orchestrator | Thursday 19 February 2026 06:02:38 +0000 (0:00:01.093) 0:19:24.496 *****
2026-02-19 06:02:57.050198 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050206 | orchestrator |
2026-02-19 06:02:57.050215 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-19 06:02:57.050224 | orchestrator | Thursday 19 February 2026 06:02:39 +0000 (0:00:01.084) 0:19:25.581 *****
2026-02-19 06:02:57.050233 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050241 | orchestrator |
2026-02-19 06:02:57.050250 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-19 06:02:57.050259 | orchestrator | Thursday 19 February 2026 06:02:40 +0000 (0:00:01.086) 0:19:26.667 *****
2026-02-19 06:02:57.050267 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050276 | orchestrator |
2026-02-19 06:02:57.050285 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-19 06:02:57.050293 | orchestrator | Thursday 19 February 2026 06:02:41 +0000 (0:00:01.154) 0:19:27.821 *****
2026-02-19 06:02:57.050302 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050311 | orchestrator |
2026-02-19 06:02:57.050319 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-19 06:02:57.050328 | orchestrator | Thursday 19 February 2026 06:02:42 +0000 (0:00:01.173) 0:19:28.994 *****
2026-02-19 06:02:57.050337 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050346 | orchestrator |
2026-02-19 06:02:57.050354 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-19 06:02:57.050363 | orchestrator | Thursday 19 February 2026 06:02:43 +0000 (0:00:01.109) 0:19:30.104 *****
2026-02-19 06:02:57.050372 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050381 | orchestrator |
2026-02-19 06:02:57.050390 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-19 06:02:57.050398 | orchestrator | Thursday 19 February 2026 06:02:44 +0000 (0:00:01.103) 0:19:31.208 *****
2026-02-19 06:02:57.050414 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050423 | orchestrator |
2026-02-19 06:02:57.050432 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-19 06:02:57.050441 | orchestrator | Thursday 19 February 2026 06:02:46 +0000 (0:00:01.087) 0:19:32.296 *****
2026-02-19 06:02:57.050449 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050458 | orchestrator |
2026-02-19 06:02:57.050467 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-19 06:02:57.050476 | orchestrator | Thursday 19 February 2026 06:02:47 +0000 (0:00:01.076) 0:19:33.373 *****
2026-02-19 06:02:57.050484 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050493 | orchestrator |
2026-02-19 06:02:57.050502 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-19 06:02:57.050510 | orchestrator | Thursday 19 February 2026 06:02:48 +0000 (0:00:01.096) 0:19:34.469 *****
2026-02-19 06:02:57.050519 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050528 | orchestrator |
2026-02-19 06:02:57.050537 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-19 06:02:57.050546 | orchestrator | Thursday 19 February 2026 06:02:49 +0000 (0:00:01.067) 0:19:35.537 *****
2026-02-19 06:02:57.050554 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050563 | orchestrator |
2026-02-19 06:02:57.050572 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 06:02:57.050580 | orchestrator | Thursday 19 February 2026 06:02:50 +0000 (0:00:01.093) 0:19:36.630 *****
2026-02-19 06:02:57.050589 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050598 | orchestrator |
2026-02-19 06:02:57.050606 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 06:02:57.050615 | orchestrator | Thursday 19 February 2026 06:02:51 +0000 (0:00:01.101) 0:19:37.732 *****
2026-02-19 06:02:57.050624 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050633 | orchestrator |
2026-02-19 06:02:57.050641 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 06:02:57.050650 | orchestrator | Thursday 19 February 2026 06:02:52 +0000 (0:00:01.085) 0:19:38.818 *****
2026-02-19 06:02:57.050658 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050667 | orchestrator |
2026-02-19 06:02:57.050676 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 06:02:57.050684 | orchestrator | Thursday 19 February 2026 06:02:53 +0000 (0:00:01.090) 0:19:39.909 *****
2026-02-19 06:02:57.050693 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050702 | orchestrator |
2026-02-19 06:02:57.050711 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 06:02:57.050721 | orchestrator | Thursday 19 February 2026 06:02:54 +0000 (0:00:01.103) 0:19:41.012 *****
2026-02-19 06:02:57.050729 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050738 | orchestrator |
2026-02-19 06:02:57.050747 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 06:02:57.050755 | orchestrator | Thursday 19 February 2026 06:02:55 +0000 (0:00:01.135) 0:19:42.148 *****
2026-02-19 06:02:57.050764 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:02:57.050773 | orchestrator |
2026-02-19 06:02:57.050789 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 06:03:35.332854 | orchestrator | Thursday 19 February 2026 06:02:57 +0000 (0:00:01.112) 0:19:43.260 *****
2026-02-19 06:03:35.332970 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.332986 | orchestrator |
2026-02-19 06:03:35.333014 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 06:03:35.333026 | orchestrator | Thursday 19 February 2026 06:02:58 +0000 (0:00:01.134) 0:19:44.395 *****
2026-02-19 06:03:35.333036 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333046 | orchestrator |
2026-02-19 06:03:35.333056 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 06:03:35.333089 | orchestrator | Thursday 19 February 2026 06:02:59 +0000 (0:00:01.141) 0:19:45.536 *****
2026-02-19 06:03:35.333099 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333109 | orchestrator |
2026-02-19 06:03:35.333118 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 06:03:35.333128 | orchestrator | Thursday 19 February 2026 06:03:00 +0000 (0:00:01.147) 0:19:46.684 *****
2026-02-19 06:03:35.333138 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333147 | orchestrator |
2026-02-19 06:03:35.333157 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 06:03:35.333171 | orchestrator | Thursday 19 February 2026 06:03:01 +0000 (0:00:01.126) 0:19:47.811 *****
2026-02-19 06:03:35.333238 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333258 | orchestrator |
2026-02-19 06:03:35.333275 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 06:03:35.333290 | orchestrator | Thursday 19 February 2026 06:03:02 +0000 (0:00:01.232) 0:19:49.043 *****
2026-02-19 06:03:35.333305 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333321 | orchestrator |
2026-02-19 06:03:35.333335 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-19 06:03:35.333351 | orchestrator | Thursday 19 February 2026 06:03:03 +0000 (0:00:01.100) 0:19:50.144 *****
2026-02-19 06:03:35.333368 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333385 | orchestrator |
2026-02-19 06:03:35.333402 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-19 06:03:35.333418 | orchestrator | Thursday 19 February 2026 06:03:05 +0000 (0:00:01.216) 0:19:51.361 *****
2026-02-19 06:03:35.333434 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333452 | orchestrator |
2026-02-19 06:03:35.333470 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-19 06:03:35.333487 | orchestrator | Thursday 19 February 2026 06:03:06 +0000 (0:00:01.130) 0:19:52.492 *****
2026-02-19 06:03:35.333503 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333516 | orchestrator |
2026-02-19 06:03:35.333528 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:03:35.333541 | orchestrator | Thursday 19 February 2026 06:03:07 +0000 (0:00:01.125) 0:19:53.617 *****
2026-02-19 06:03:35.333552 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333563 | orchestrator |
2026-02-19 06:03:35.333574 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:03:35.333585 | orchestrator | Thursday 19 February 2026 06:03:08 +0000 (0:00:01.114) 0:19:54.732 *****
2026-02-19 06:03:35.333596 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333607 | orchestrator |
2026-02-19 06:03:35.333617 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:03:35.333629 | orchestrator | Thursday 19 February 2026 06:03:09 +0000 (0:00:01.096) 0:19:55.829 *****
2026-02-19 06:03:35.333640 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333651 | orchestrator |
2026-02-19 06:03:35.333662 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:03:35.333673 | orchestrator | Thursday 19 February 2026 06:03:10 +0000 (0:00:01.140) 0:19:56.969 *****
2026-02-19 06:03:35.333685 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333696 | orchestrator |
2026-02-19 06:03:35.333707 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:03:35.333717 | orchestrator | Thursday 19 February 2026 06:03:11 +0000 (0:00:01.163) 0:19:58.132 *****
2026-02-19 06:03:35.333726 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-19 06:03:35.333736 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-19 06:03:35.333746 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-19 06:03:35.333755 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:03:35.333781 | orchestrator |
2026-02-19 06:03:35.333797 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 06:03:35.333813 | orchestrator | Thursday 19 February 2026 06:03:13 +0000 (0:00:01.408) 0:19:59.541 *****
2026-02-19 06:03:35.333830 | orchestrator |
skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-19 06:03:35.333848 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-19 06:03:35.333865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-19 06:03:35.333880 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:03:35.333895 | orchestrator | 2026-02-19 06:03:35.333905 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 06:03:35.333914 | orchestrator | Thursday 19 February 2026 06:03:15 +0000 (0:00:01.746) 0:20:01.287 ***** 2026-02-19 06:03:35.333924 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-19 06:03:35.333933 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-19 06:03:35.333943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-19 06:03:35.333952 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:03:35.333962 | orchestrator | 2026-02-19 06:03:35.333971 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 06:03:35.333981 | orchestrator | Thursday 19 February 2026 06:03:16 +0000 (0:00:01.700) 0:20:02.988 ***** 2026-02-19 06:03:35.333990 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:03:35.334000 | orchestrator | 2026-02-19 06:03:35.334009 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-19 06:03:35.334106 | orchestrator | Thursday 19 February 2026 06:03:17 +0000 (0:00:01.101) 0:20:04.090 ***** 2026-02-19 06:03:35.334117 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-19 06:03:35.334127 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:03:35.334137 | orchestrator | 2026-02-19 06:03:35.334155 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-19 06:03:35.334165 | orchestrator | Thursday 19 February 
2026 06:03:19 +0000 (0:00:01.263) 0:20:05.353 ***** 2026-02-19 06:03:35.334175 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:03:35.334184 | orchestrator | 2026-02-19 06:03:35.334220 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-19 06:03:35.334231 | orchestrator | Thursday 19 February 2026 06:03:20 +0000 (0:00:01.114) 0:20:06.468 ***** 2026-02-19 06:03:35.334240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-19 06:03:35.334250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-19 06:03:35.334259 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-19 06:03:35.334269 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:03:35.334279 | orchestrator | 2026-02-19 06:03:35.334289 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-19 06:03:35.334298 | orchestrator | Thursday 19 February 2026 06:03:21 +0000 (0:00:01.398) 0:20:07.866 ***** 2026-02-19 06:03:35.334308 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:03:35.334318 | orchestrator | 2026-02-19 06:03:35.334328 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-19 06:03:35.334337 | orchestrator | Thursday 19 February 2026 06:03:22 +0000 (0:00:01.095) 0:20:08.961 ***** 2026-02-19 06:03:35.334347 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:03:35.334357 | orchestrator | 2026-02-19 06:03:35.334367 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-19 06:03:35.334376 | orchestrator | Thursday 19 February 2026 06:03:23 +0000 (0:00:01.107) 0:20:10.069 ***** 2026-02-19 06:03:35.334386 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:03:35.334396 | orchestrator | 2026-02-19 06:03:35.334406 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2026-02-19 06:03:35.334415 | orchestrator | Thursday 19 February 2026 06:03:24 +0000 (0:00:01.106) 0:20:11.175 ***** 2026-02-19 06:03:35.334425 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:03:35.334435 | orchestrator | 2026-02-19 06:03:35.334453 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-19 06:03:35.334463 | orchestrator | 2026-02-19 06:03:35.334473 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-19 06:03:35.334483 | orchestrator | Thursday 19 February 2026 06:03:25 +0000 (0:00:00.994) 0:20:12.170 ***** 2026-02-19 06:03:35.334492 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:03:35.334502 | orchestrator | 2026-02-19 06:03:35.334512 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 06:03:35.334521 | orchestrator | Thursday 19 February 2026 06:03:26 +0000 (0:00:00.802) 0:20:12.973 ***** 2026-02-19 06:03:35.334531 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:03:35.334541 | orchestrator | 2026-02-19 06:03:35.334550 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-19 06:03:35.334560 | orchestrator | Thursday 19 February 2026 06:03:27 +0000 (0:00:00.814) 0:20:13.788 ***** 2026-02-19 06:03:35.334569 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:03:35.334579 | orchestrator | 2026-02-19 06:03:35.334589 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-19 06:03:35.334599 | orchestrator | Thursday 19 February 2026 06:03:28 +0000 (0:00:00.764) 0:20:14.553 ***** 2026-02-19 06:03:35.334608 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:03:35.334618 | orchestrator | 2026-02-19 06:03:35.334628 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 
2026-02-19 06:03:35.334637 | orchestrator | Thursday 19 February 2026 06:03:29 +0000 (0:00:00.812) 0:20:15.365 ***** 2026-02-19 06:03:35.334647 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:03:35.334656 | orchestrator | 2026-02-19 06:03:35.334666 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-19 06:03:35.334676 | orchestrator | Thursday 19 February 2026 06:03:29 +0000 (0:00:00.757) 0:20:16.123 ***** 2026-02-19 06:03:35.334685 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:03:35.334695 | orchestrator | 2026-02-19 06:03:35.334705 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-19 06:03:35.334715 | orchestrator | Thursday 19 February 2026 06:03:30 +0000 (0:00:00.801) 0:20:16.924 ***** 2026-02-19 06:03:35.334724 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:03:35.334734 | orchestrator | 2026-02-19 06:03:35.334744 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-19 06:03:35.334754 | orchestrator | Thursday 19 February 2026 06:03:31 +0000 (0:00:00.774) 0:20:17.699 ***** 2026-02-19 06:03:35.334763 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:03:35.334773 | orchestrator | 2026-02-19 06:03:35.334783 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-19 06:03:35.334792 | orchestrator | Thursday 19 February 2026 06:03:32 +0000 (0:00:00.765) 0:20:18.464 ***** 2026-02-19 06:03:35.334802 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:03:35.334815 | orchestrator | 2026-02-19 06:03:35.334832 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-19 06:03:35.334849 | orchestrator | Thursday 19 February 2026 06:03:33 +0000 (0:00:00.788) 0:20:19.253 ***** 2026-02-19 06:03:35.334866 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:03:35.334883 
| orchestrator | 2026-02-19 06:03:35.334900 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-19 06:03:35.334916 | orchestrator | Thursday 19 February 2026 06:03:33 +0000 (0:00:00.758) 0:20:20.012 ***** 2026-02-19 06:03:35.334933 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:03:35.334943 | orchestrator | 2026-02-19 06:03:35.334953 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-19 06:03:35.334963 | orchestrator | Thursday 19 February 2026 06:03:34 +0000 (0:00:00.753) 0:20:20.765 ***** 2026-02-19 06:03:35.334973 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:03:35.334982 | orchestrator | 2026-02-19 06:03:35.334999 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-19 06:04:06.337696 | orchestrator | Thursday 19 February 2026 06:03:35 +0000 (0:00:00.780) 0:20:21.546 ***** 2026-02-19 06:04:06.337815 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.337829 | orchestrator | 2026-02-19 06:04:06.337851 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-19 06:04:06.337860 | orchestrator | Thursday 19 February 2026 06:03:36 +0000 (0:00:00.757) 0:20:22.304 ***** 2026-02-19 06:04:06.337868 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.337876 | orchestrator | 2026-02-19 06:04:06.337885 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-19 06:04:06.337893 | orchestrator | Thursday 19 February 2026 06:03:36 +0000 (0:00:00.759) 0:20:23.063 ***** 2026-02-19 06:04:06.337900 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.337908 | orchestrator | 2026-02-19 06:04:06.337916 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-19 06:04:06.337924 | orchestrator | Thursday 19 February 2026 
06:03:37 +0000 (0:00:00.761) 0:20:23.826 ***** 2026-02-19 06:04:06.337932 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.337940 | orchestrator | 2026-02-19 06:04:06.337948 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-19 06:04:06.337956 | orchestrator | Thursday 19 February 2026 06:03:38 +0000 (0:00:00.763) 0:20:24.589 ***** 2026-02-19 06:04:06.337964 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.337972 | orchestrator | 2026-02-19 06:04:06.337979 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-19 06:04:06.337987 | orchestrator | Thursday 19 February 2026 06:03:39 +0000 (0:00:00.765) 0:20:25.355 ***** 2026-02-19 06:04:06.337995 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338002 | orchestrator | 2026-02-19 06:04:06.338010 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-19 06:04:06.338068 | orchestrator | Thursday 19 February 2026 06:03:39 +0000 (0:00:00.758) 0:20:26.114 ***** 2026-02-19 06:04:06.338077 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338084 | orchestrator | 2026-02-19 06:04:06.338092 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-19 06:04:06.338101 | orchestrator | Thursday 19 February 2026 06:03:40 +0000 (0:00:00.756) 0:20:26.871 ***** 2026-02-19 06:04:06.338109 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338117 | orchestrator | 2026-02-19 06:04:06.338124 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-19 06:04:06.338132 | orchestrator | Thursday 19 February 2026 06:03:41 +0000 (0:00:00.775) 0:20:27.646 ***** 2026-02-19 06:04:06.338140 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338148 | orchestrator | 2026-02-19 06:04:06.338156 | 
orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-19 06:04:06.338163 | orchestrator | Thursday 19 February 2026 06:03:42 +0000 (0:00:00.757) 0:20:28.404 ***** 2026-02-19 06:04:06.338171 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338179 | orchestrator | 2026-02-19 06:04:06.338187 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-19 06:04:06.338194 | orchestrator | Thursday 19 February 2026 06:03:42 +0000 (0:00:00.802) 0:20:29.206 ***** 2026-02-19 06:04:06.338202 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338229 | orchestrator | 2026-02-19 06:04:06.338237 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-19 06:04:06.338245 | orchestrator | Thursday 19 February 2026 06:03:43 +0000 (0:00:00.759) 0:20:29.966 ***** 2026-02-19 06:04:06.338255 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338264 | orchestrator | 2026-02-19 06:04:06.338273 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-19 06:04:06.338282 | orchestrator | Thursday 19 February 2026 06:03:44 +0000 (0:00:00.765) 0:20:30.731 ***** 2026-02-19 06:04:06.338292 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338301 | orchestrator | 2026-02-19 06:04:06.338311 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-19 06:04:06.338328 | orchestrator | Thursday 19 February 2026 06:03:45 +0000 (0:00:00.799) 0:20:31.530 ***** 2026-02-19 06:04:06.338337 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338346 | orchestrator | 2026-02-19 06:04:06.338355 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-19 06:04:06.338365 | orchestrator | Thursday 19 February 2026 06:03:46 +0000 (0:00:00.809) 0:20:32.340 ***** 
2026-02-19 06:04:06.338374 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338383 | orchestrator | 2026-02-19 06:04:06.338392 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-19 06:04:06.338401 | orchestrator | Thursday 19 February 2026 06:03:46 +0000 (0:00:00.757) 0:20:33.097 ***** 2026-02-19 06:04:06.338410 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338419 | orchestrator | 2026-02-19 06:04:06.338428 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-19 06:04:06.338437 | orchestrator | Thursday 19 February 2026 06:03:47 +0000 (0:00:00.790) 0:20:33.887 ***** 2026-02-19 06:04:06.338446 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338455 | orchestrator | 2026-02-19 06:04:06.338465 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-19 06:04:06.338474 | orchestrator | Thursday 19 February 2026 06:03:48 +0000 (0:00:00.756) 0:20:34.644 ***** 2026-02-19 06:04:06.338483 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338491 | orchestrator | 2026-02-19 06:04:06.338501 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-19 06:04:06.338510 | orchestrator | Thursday 19 February 2026 06:03:49 +0000 (0:00:00.768) 0:20:35.413 ***** 2026-02-19 06:04:06.338519 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338528 | orchestrator | 2026-02-19 06:04:06.338538 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-19 06:04:06.338547 | orchestrator | Thursday 19 February 2026 06:03:49 +0000 (0:00:00.783) 0:20:36.196 ***** 2026-02-19 06:04:06.338556 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338565 | orchestrator | 2026-02-19 06:04:06.338575 | orchestrator | TASK [ceph-config : Include 
create_ceph_initial_dirs.yml] ********************** 2026-02-19 06:04:06.338598 | orchestrator | Thursday 19 February 2026 06:03:50 +0000 (0:00:00.784) 0:20:36.981 ***** 2026-02-19 06:04:06.338608 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338616 | orchestrator | 2026-02-19 06:04:06.338628 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-19 06:04:06.338636 | orchestrator | Thursday 19 February 2026 06:03:51 +0000 (0:00:00.815) 0:20:37.796 ***** 2026-02-19 06:04:06.338644 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338652 | orchestrator | 2026-02-19 06:04:06.338660 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-19 06:04:06.338668 | orchestrator | Thursday 19 February 2026 06:03:52 +0000 (0:00:00.775) 0:20:38.572 ***** 2026-02-19 06:04:06.338675 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338683 | orchestrator | 2026-02-19 06:04:06.338691 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-19 06:04:06.338699 | orchestrator | Thursday 19 February 2026 06:03:53 +0000 (0:00:00.762) 0:20:39.335 ***** 2026-02-19 06:04:06.338707 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338714 | orchestrator | 2026-02-19 06:04:06.338722 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-19 06:04:06.338730 | orchestrator | Thursday 19 February 2026 06:03:53 +0000 (0:00:00.737) 0:20:40.073 ***** 2026-02-19 06:04:06.338738 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338746 | orchestrator | 2026-02-19 06:04:06.338754 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-19 06:04:06.338761 | orchestrator | Thursday 19 February 2026 06:03:54 +0000 (0:00:00.782) 0:20:40.856 ***** 2026-02-19 06:04:06.338769 | orchestrator | 
skipping: [testbed-node-1] 2026-02-19 06:04:06.338777 | orchestrator | 2026-02-19 06:04:06.338785 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-19 06:04:06.338798 | orchestrator | Thursday 19 February 2026 06:03:55 +0000 (0:00:00.752) 0:20:41.608 ***** 2026-02-19 06:04:06.338806 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338814 | orchestrator | 2026-02-19 06:04:06.338822 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-19 06:04:06.338831 | orchestrator | Thursday 19 February 2026 06:03:56 +0000 (0:00:00.799) 0:20:42.407 ***** 2026-02-19 06:04:06.338839 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338847 | orchestrator | 2026-02-19 06:04:06.338854 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-19 06:04:06.338863 | orchestrator | Thursday 19 February 2026 06:03:56 +0000 (0:00:00.780) 0:20:43.188 ***** 2026-02-19 06:04:06.338871 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338878 | orchestrator | 2026-02-19 06:04:06.338886 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-19 06:04:06.338895 | orchestrator | Thursday 19 February 2026 06:03:57 +0000 (0:00:00.763) 0:20:43.951 ***** 2026-02-19 06:04:06.338902 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338910 | orchestrator | 2026-02-19 06:04:06.338918 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-19 06:04:06.338926 | orchestrator | Thursday 19 February 2026 06:03:58 +0000 (0:00:00.757) 0:20:44.709 ***** 2026-02-19 06:04:06.338934 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338942 | orchestrator | 2026-02-19 06:04:06.338950 | orchestrator | TASK 
[ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-19 06:04:06.338957 | orchestrator | Thursday 19 February 2026 06:03:59 +0000 (0:00:00.800) 0:20:45.510 ***** 2026-02-19 06:04:06.338965 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.338973 | orchestrator | 2026-02-19 06:04:06.338981 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-19 06:04:06.338989 | orchestrator | Thursday 19 February 2026 06:04:00 +0000 (0:00:00.756) 0:20:46.266 ***** 2026-02-19 06:04:06.338997 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.339004 | orchestrator | 2026-02-19 06:04:06.339012 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-19 06:04:06.339020 | orchestrator | Thursday 19 February 2026 06:04:00 +0000 (0:00:00.751) 0:20:47.018 ***** 2026-02-19 06:04:06.339028 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.339036 | orchestrator | 2026-02-19 06:04:06.339044 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-19 06:04:06.339052 | orchestrator | Thursday 19 February 2026 06:04:01 +0000 (0:00:00.846) 0:20:47.865 ***** 2026-02-19 06:04:06.339059 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.339067 | orchestrator | 2026-02-19 06:04:06.339075 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-19 06:04:06.339083 | orchestrator | Thursday 19 February 2026 06:04:02 +0000 (0:00:00.757) 0:20:48.623 ***** 2026-02-19 06:04:06.339091 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.339098 | orchestrator | 2026-02-19 06:04:06.339106 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-19 06:04:06.339114 | orchestrator | Thursday 19 February 2026 06:04:03 +0000 (0:00:00.855) 0:20:49.479 ***** 2026-02-19 
06:04:06.339122 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.339130 | orchestrator | 2026-02-19 06:04:06.339138 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-19 06:04:06.339145 | orchestrator | Thursday 19 February 2026 06:04:04 +0000 (0:00:00.780) 0:20:50.259 ***** 2026-02-19 06:04:06.339153 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.339161 | orchestrator | 2026-02-19 06:04:06.339169 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-19 06:04:06.339178 | orchestrator | Thursday 19 February 2026 06:04:04 +0000 (0:00:00.750) 0:20:51.010 ***** 2026-02-19 06:04:06.339191 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.339199 | orchestrator | 2026-02-19 06:04:06.339207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-19 06:04:06.339230 | orchestrator | Thursday 19 February 2026 06:04:05 +0000 (0:00:00.760) 0:20:51.770 ***** 2026-02-19 06:04:06.339238 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:06.339246 | orchestrator | 2026-02-19 06:04:06.339258 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-19 06:04:36.139175 | orchestrator | Thursday 19 February 2026 06:04:06 +0000 (0:00:00.778) 0:20:52.549 ***** 2026-02-19 06:04:36.139417 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:36.139441 | orchestrator | 2026-02-19 06:04:36.139471 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-19 06:04:36.139484 | orchestrator | Thursday 19 February 2026 06:04:07 +0000 (0:00:00.759) 0:20:53.308 ***** 2026-02-19 06:04:36.139495 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:36.139506 | orchestrator | 2026-02-19 06:04:36.139518 | orchestrator | TASK 
[ceph-facts : Set_fact _interface] **************************************** 2026-02-19 06:04:36.139529 | orchestrator | Thursday 19 February 2026 06:04:07 +0000 (0:00:00.772) 0:20:54.081 ***** 2026-02-19 06:04:36.139540 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-19 06:04:36.139550 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-19 06:04:36.139561 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-19 06:04:36.139572 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:36.139583 | orchestrator | 2026-02-19 06:04:36.139594 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-19 06:04:36.139605 | orchestrator | Thursday 19 February 2026 06:04:08 +0000 (0:00:01.047) 0:20:55.128 ***** 2026-02-19 06:04:36.139616 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-19 06:04:36.139627 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-19 06:04:36.139638 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-19 06:04:36.139649 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:36.139660 | orchestrator | 2026-02-19 06:04:36.139671 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 06:04:36.139682 | orchestrator | Thursday 19 February 2026 06:04:09 +0000 (0:00:01.008) 0:20:56.137 ***** 2026-02-19 06:04:36.139693 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-19 06:04:36.139704 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-19 06:04:36.139715 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-19 06:04:36.139726 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:04:36.139737 | orchestrator | 2026-02-19 06:04:36.139748 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] 
***************************
2026-02-19 06:04:36.139759 | orchestrator | Thursday 19 February 2026 06:04:10 +0000 (0:00:01.028) 0:20:57.166 *****
2026-02-19 06:04:36.139770 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:04:36.139781 | orchestrator |
2026-02-19 06:04:36.139791 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 06:04:36.139803 | orchestrator | Thursday 19 February 2026 06:04:11 +0000 (0:00:00.795) 0:20:57.961 *****
2026-02-19 06:04:36.139814 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-19 06:04:36.139825 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:04:36.139836 | orchestrator |
2026-02-19 06:04:36.139847 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-19 06:04:36.139858 | orchestrator | Thursday 19 February 2026 06:04:12 +0000 (0:00:00.891) 0:20:58.852 *****
2026-02-19 06:04:36.139869 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:04:36.139880 | orchestrator |
2026-02-19 06:04:36.139891 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-19 06:04:36.139902 | orchestrator | Thursday 19 February 2026 06:04:13 +0000 (0:00:00.777) 0:20:59.630 *****
2026-02-19 06:04:36.139936 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-19 06:04:36.139948 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-19 06:04:36.139959 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-19 06:04:36.139970 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:04:36.139981 | orchestrator |
2026-02-19 06:04:36.139991 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-19 06:04:36.140002 | orchestrator | Thursday 19 February 2026 06:04:14 +0000 (0:00:01.339) 0:21:00.969 *****
2026-02-19 06:04:36.140013 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:04:36.140024 | orchestrator |
2026-02-19 06:04:36.140035 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-19 06:04:36.140046 | orchestrator | Thursday 19 February 2026 06:04:15 +0000 (0:00:00.811) 0:21:01.780 *****
2026-02-19 06:04:36.140056 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:04:36.140067 | orchestrator |
2026-02-19 06:04:36.140078 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-19 06:04:36.140089 | orchestrator | Thursday 19 February 2026 06:04:16 +0000 (0:00:00.859) 0:21:02.640 *****
2026-02-19 06:04:36.140099 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:04:36.140110 | orchestrator |
2026-02-19 06:04:36.140121 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-19 06:04:36.140132 | orchestrator | Thursday 19 February 2026 06:04:17 +0000 (0:00:00.758) 0:21:03.398 *****
2026-02-19 06:04:36.140142 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:04:36.140153 | orchestrator |
2026-02-19 06:04:36.140164 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-19 06:04:36.140175 | orchestrator |
2026-02-19 06:04:36.140201 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-19 06:04:36.140212 | orchestrator | Thursday 19 February 2026 06:04:18 +0000 (0:00:01.002) 0:21:04.401 *****
2026-02-19 06:04:36.140223 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140280 | orchestrator |
2026-02-19 06:04:36.140291 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-19 06:04:36.140302 | orchestrator | Thursday 19 February 2026 06:04:19 +0000 (0:00:00.842) 0:21:05.243 *****
2026-02-19 06:04:36.140313 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140324 | orchestrator |
2026-02-19 06:04:36.140335 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 06:04:36.140346 | orchestrator | Thursday 19 February 2026 06:04:19 +0000 (0:00:00.809) 0:21:06.053 *****
2026-02-19 06:04:36.140357 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140368 | orchestrator |
2026-02-19 06:04:36.140398 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 06:04:36.140410 | orchestrator | Thursday 19 February 2026 06:04:20 +0000 (0:00:00.762) 0:21:06.816 *****
2026-02-19 06:04:36.140427 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140438 | orchestrator |
2026-02-19 06:04:36.140449 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 06:04:36.140460 | orchestrator | Thursday 19 February 2026 06:04:21 +0000 (0:00:00.779) 0:21:07.596 *****
2026-02-19 06:04:36.140470 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140481 | orchestrator |
2026-02-19 06:04:36.140492 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 06:04:36.140502 | orchestrator | Thursday 19 February 2026 06:04:22 +0000 (0:00:00.751) 0:21:08.348 *****
2026-02-19 06:04:36.140513 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140524 | orchestrator |
2026-02-19 06:04:36.140534 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 06:04:36.140545 | orchestrator | Thursday 19 February 2026 06:04:22 +0000 (0:00:00.801) 0:21:09.150 *****
2026-02-19 06:04:36.140555 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140566 | orchestrator |
2026-02-19 06:04:36.140576 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 06:04:36.140596 | orchestrator | Thursday 19 February 2026 06:04:23 +0000 (0:00:00.771) 0:21:09.922 *****
2026-02-19 06:04:36.140607 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140618 | orchestrator |
2026-02-19 06:04:36.140629 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 06:04:36.140639 | orchestrator | Thursday 19 February 2026 06:04:24 +0000 (0:00:00.755) 0:21:10.677 *****
2026-02-19 06:04:36.140650 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140661 | orchestrator |
2026-02-19 06:04:36.140671 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 06:04:36.140682 | orchestrator | Thursday 19 February 2026 06:04:25 +0000 (0:00:00.768) 0:21:11.446 *****
2026-02-19 06:04:36.140693 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140703 | orchestrator |
2026-02-19 06:04:36.140714 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 06:04:36.140725 | orchestrator | Thursday 19 February 2026 06:04:25 +0000 (0:00:00.754) 0:21:12.201 *****
2026-02-19 06:04:36.140735 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140746 | orchestrator |
2026-02-19 06:04:36.140757 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 06:04:36.140767 | orchestrator | Thursday 19 February 2026 06:04:26 +0000 (0:00:00.823) 0:21:13.024 *****
2026-02-19 06:04:36.140778 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140797 | orchestrator |
2026-02-19 06:04:36.140815 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-19 06:04:36.140842 | orchestrator | Thursday 19 February 2026 06:04:27 +0000 (0:00:00.781) 0:21:13.806 *****
2026-02-19 06:04:36.140863 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140881 | orchestrator |
2026-02-19 06:04:36.140898 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-19 06:04:36.140915 | orchestrator | Thursday 19 February 2026 06:04:28 +0000 (0:00:00.792) 0:21:14.599 *****
2026-02-19 06:04:36.140931 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.140949 | orchestrator |
2026-02-19 06:04:36.140968 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-19 06:04:36.140985 | orchestrator | Thursday 19 February 2026 06:04:29 +0000 (0:00:00.747) 0:21:15.346 *****
2026-02-19 06:04:36.141004 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.141023 | orchestrator |
2026-02-19 06:04:36.141042 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-19 06:04:36.141061 | orchestrator | Thursday 19 February 2026 06:04:29 +0000 (0:00:00.763) 0:21:16.110 *****
2026-02-19 06:04:36.141073 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.141084 | orchestrator |
2026-02-19 06:04:36.141095 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-19 06:04:36.141105 | orchestrator | Thursday 19 February 2026 06:04:30 +0000 (0:00:00.784) 0:21:16.894 *****
2026-02-19 06:04:36.141116 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.141127 | orchestrator |
2026-02-19 06:04:36.141138 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-19 06:04:36.141148 | orchestrator | Thursday 19 February 2026 06:04:31 +0000 (0:00:00.794) 0:21:17.688 *****
2026-02-19 06:04:36.141159 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.141170 | orchestrator |
2026-02-19 06:04:36.141180 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-19 06:04:36.141191 | orchestrator | Thursday 19 February 2026 06:04:32 +0000 (0:00:00.809) 0:21:18.498 *****
2026-02-19 06:04:36.141202 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.141213 | orchestrator |
2026-02-19 06:04:36.141223 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-19 06:04:36.141265 | orchestrator | Thursday 19 February 2026 06:04:33 +0000 (0:00:00.767) 0:21:19.265 *****
2026-02-19 06:04:36.141277 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.141287 | orchestrator |
2026-02-19 06:04:36.141299 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-19 06:04:36.141320 | orchestrator | Thursday 19 February 2026 06:04:33 +0000 (0:00:00.749) 0:21:20.015 *****
2026-02-19 06:04:36.141331 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.141342 | orchestrator |
2026-02-19 06:04:36.141352 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-19 06:04:36.141363 | orchestrator | Thursday 19 February 2026 06:04:34 +0000 (0:00:00.759) 0:21:20.775 *****
2026-02-19 06:04:36.141374 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.141384 | orchestrator |
2026-02-19 06:04:36.141395 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-19 06:04:36.141406 | orchestrator | Thursday 19 February 2026 06:04:35 +0000 (0:00:00.809) 0:21:21.585 *****
2026-02-19 06:04:36.141417 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:04:36.141427 | orchestrator |
2026-02-19 06:04:36.141438 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-19 06:04:36.141459 | orchestrator | Thursday 19 February 2026 06:04:36 +0000 (0:00:00.769) 0:21:22.354 *****
2026-02-19 06:05:06.347885 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348028 | orchestrator |
2026-02-19 06:05:06.348071 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-19 06:05:06.348092 | orchestrator | Thursday 19 February 2026 06:04:36 +0000 (0:00:00.743) 0:21:23.098 *****
2026-02-19 06:05:06.348109 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348126 | orchestrator |
2026-02-19 06:05:06.348143 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-19 06:05:06.348160 | orchestrator | Thursday 19 February 2026 06:04:37 +0000 (0:00:00.750) 0:21:23.848 *****
2026-02-19 06:05:06.348175 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348192 | orchestrator |
2026-02-19 06:05:06.348209 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-19 06:05:06.348227 | orchestrator | Thursday 19 February 2026 06:04:38 +0000 (0:00:00.758) 0:21:24.607 *****
2026-02-19 06:05:06.348273 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348293 | orchestrator |
2026-02-19 06:05:06.348311 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-19 06:05:06.348328 | orchestrator | Thursday 19 February 2026 06:04:39 +0000 (0:00:00.756) 0:21:25.364 *****
2026-02-19 06:05:06.348346 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348364 | orchestrator |
2026-02-19 06:05:06.348381 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-19 06:05:06.348399 | orchestrator | Thursday 19 February 2026 06:04:39 +0000 (0:00:00.742) 0:21:26.107 *****
2026-02-19 06:05:06.348419 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348440 | orchestrator |
2026-02-19 06:05:06.348458 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-19 06:05:06.348478 | orchestrator | Thursday 19 February 2026 06:04:40 +0000 (0:00:00.742) 0:21:26.850 *****
2026-02-19 06:05:06.348498 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348518 | orchestrator |
2026-02-19 06:05:06.348537 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-19 06:05:06.348558 | orchestrator | Thursday 19 February 2026 06:04:41 +0000 (0:00:00.755) 0:21:27.606 *****
2026-02-19 06:05:06.348577 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348597 | orchestrator |
2026-02-19 06:05:06.348616 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-19 06:05:06.348639 | orchestrator | Thursday 19 February 2026 06:04:42 +0000 (0:00:00.757) 0:21:28.363 *****
2026-02-19 06:05:06.348659 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348680 | orchestrator |
2026-02-19 06:05:06.348699 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-19 06:05:06.348719 | orchestrator | Thursday 19 February 2026 06:04:42 +0000 (0:00:00.796) 0:21:29.160 *****
2026-02-19 06:05:06.348740 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348760 | orchestrator |
2026-02-19 06:05:06.348778 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-19 06:05:06.348827 | orchestrator | Thursday 19 February 2026 06:04:43 +0000 (0:00:00.793) 0:21:29.953 *****
2026-02-19 06:05:06.348846 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348864 | orchestrator |
2026-02-19 06:05:06.348881 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-19 06:05:06.348897 | orchestrator | Thursday 19 February 2026 06:04:44 +0000 (0:00:00.795) 0:21:30.749 *****
2026-02-19 06:05:06.348914 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348930 | orchestrator |
2026-02-19 06:05:06.348946 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 06:05:06.348962 | orchestrator | Thursday 19 February 2026 06:04:45 +0000 (0:00:00.776) 0:21:31.526 *****
2026-02-19 06:05:06.348978 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.348992 | orchestrator |
2026-02-19 06:05:06.349008 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 06:05:06.349024 | orchestrator | Thursday 19 February 2026 06:04:46 +0000 (0:00:00.756) 0:21:32.282 *****
2026-02-19 06:05:06.349040 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349056 | orchestrator |
2026-02-19 06:05:06.349072 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 06:05:06.349086 | orchestrator | Thursday 19 February 2026 06:04:46 +0000 (0:00:00.756) 0:21:33.039 *****
2026-02-19 06:05:06.349101 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349116 | orchestrator |
2026-02-19 06:05:06.349131 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 06:05:06.349146 | orchestrator | Thursday 19 February 2026 06:04:47 +0000 (0:00:00.753) 0:21:33.792 *****
2026-02-19 06:05:06.349160 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349176 | orchestrator |
2026-02-19 06:05:06.349191 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 06:05:06.349207 | orchestrator | Thursday 19 February 2026 06:04:48 +0000 (0:00:00.757) 0:21:34.549 *****
2026-02-19 06:05:06.349222 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349237 | orchestrator |
2026-02-19 06:05:06.349280 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 06:05:06.349296 | orchestrator | Thursday 19 February 2026 06:04:49 +0000 (0:00:00.765) 0:21:35.315 *****
2026-02-19 06:05:06.349312 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349327 | orchestrator |
2026-02-19 06:05:06.349343 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 06:05:06.349358 | orchestrator | Thursday 19 February 2026 06:04:49 +0000 (0:00:00.777) 0:21:36.092 *****
2026-02-19 06:05:06.349373 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349387 | orchestrator |
2026-02-19 06:05:06.349402 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 06:05:06.349418 | orchestrator | Thursday 19 February 2026 06:04:50 +0000 (0:00:00.781) 0:21:36.873 *****
2026-02-19 06:05:06.349433 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349449 | orchestrator |
2026-02-19 06:05:06.349465 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 06:05:06.349540 | orchestrator | Thursday 19 February 2026 06:04:51 +0000 (0:00:00.766) 0:21:37.640 *****
2026-02-19 06:05:06.349561 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349578 | orchestrator |
2026-02-19 06:05:06.349609 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 06:05:06.349628 | orchestrator | Thursday 19 February 2026 06:04:52 +0000 (0:00:00.772) 0:21:38.412 *****
2026-02-19 06:05:06.349646 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349662 | orchestrator |
2026-02-19 06:05:06.349677 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 06:05:06.349692 | orchestrator | Thursday 19 February 2026 06:04:52 +0000 (0:00:00.749) 0:21:39.162 *****
2026-02-19 06:05:06.349727 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349744 | orchestrator |
2026-02-19 06:05:06.349761 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 06:05:06.349777 | orchestrator | Thursday 19 February 2026 06:04:53 +0000 (0:00:00.859) 0:21:40.022 *****
2026-02-19 06:05:06.349792 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349802 | orchestrator |
2026-02-19 06:05:06.349812 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-19 06:05:06.349821 | orchestrator | Thursday 19 February 2026 06:04:54 +0000 (0:00:00.765) 0:21:40.788 *****
2026-02-19 06:05:06.349831 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349840 | orchestrator |
2026-02-19 06:05:06.349850 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-19 06:05:06.349859 | orchestrator | Thursday 19 February 2026 06:04:55 +0000 (0:00:00.898) 0:21:41.686 *****
2026-02-19 06:05:06.349868 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349878 | orchestrator |
2026-02-19 06:05:06.349887 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-19 06:05:06.349896 | orchestrator | Thursday 19 February 2026 06:04:56 +0000 (0:00:00.771) 0:21:42.457 *****
2026-02-19 06:05:06.349906 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349915 | orchestrator |
2026-02-19 06:05:06.349925 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:05:06.349936 | orchestrator | Thursday 19 February 2026 06:04:56 +0000 (0:00:00.765) 0:21:43.223 *****
2026-02-19 06:05:06.349946 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349955 | orchestrator |
2026-02-19 06:05:06.349965 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:05:06.349974 | orchestrator | Thursday 19 February 2026 06:04:57 +0000 (0:00:00.772) 0:21:43.996 *****
2026-02-19 06:05:06.349983 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.349993 | orchestrator |
2026-02-19 06:05:06.350002 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:05:06.350012 | orchestrator | Thursday 19 February 2026 06:04:58 +0000 (0:00:00.762) 0:21:44.758 *****
2026-02-19 06:05:06.350089 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.350100 | orchestrator |
2026-02-19 06:05:06.350110 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:05:06.350119 | orchestrator | Thursday 19 February 2026 06:04:59 +0000 (0:00:00.766) 0:21:45.524 *****
2026-02-19 06:05:06.350129 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.350138 | orchestrator |
2026-02-19 06:05:06.350148 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:05:06.350158 | orchestrator | Thursday 19 February 2026 06:05:00 +0000 (0:00:00.764) 0:21:46.289 *****
2026-02-19 06:05:06.350167 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-19 06:05:06.350177 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-19 06:05:06.350187 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-19 06:05:06.350197 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.350206 | orchestrator |
2026-02-19 06:05:06.350216 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 06:05:06.350226 | orchestrator | Thursday 19 February 2026 06:05:01 +0000 (0:00:01.093) 0:21:47.383 *****
2026-02-19 06:05:06.350236 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-19 06:05:06.350268 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-19 06:05:06.350279 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-19 06:05:06.350289 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.350299 | orchestrator |
2026-02-19 06:05:06.350309 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 06:05:06.350318 | orchestrator | Thursday 19 February 2026 06:05:02 +0000 (0:00:01.343) 0:21:48.726 *****
2026-02-19 06:05:06.350336 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-19 06:05:06.350346 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-19 06:05:06.350355 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-19 06:05:06.350365 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.350375 | orchestrator |
2026-02-19 06:05:06.350385 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 06:05:06.350394 | orchestrator | Thursday 19 February 2026 06:05:03 +0000 (0:00:01.321) 0:21:50.047 *****
2026-02-19 06:05:06.350404 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.350414 | orchestrator |
2026-02-19 06:05:06.350423 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 06:05:06.350433 | orchestrator | Thursday 19 February 2026 06:05:04 +0000 (0:00:00.783) 0:21:50.831 *****
2026-02-19 06:05:06.350442 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-19 06:05:06.350452 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.350462 | orchestrator |
2026-02-19 06:05:06.350471 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-19 06:05:06.350481 | orchestrator | Thursday 19 February 2026 06:05:05 +0000 (0:00:00.952) 0:21:51.783 *****
2026-02-19 06:05:06.350490 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:06.350500 | orchestrator |
2026-02-19 06:05:06.350510 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-19 06:05:06.350532 | orchestrator | Thursday 19 February 2026 06:05:06 +0000 (0:00:00.776) 0:21:52.560 *****
2026-02-19 06:05:39.605865 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-19 06:05:39.605972 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-19 06:05:39.605986 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-19 06:05:39.605997 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:39.606006 | orchestrator |
2026-02-19 06:05:39.606070 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-19 06:05:39.606084 | orchestrator | Thursday 19 February 2026 06:05:07 +0000 (0:00:01.023) 0:21:53.583 *****
2026-02-19 06:05:39.606095 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:39.606103 | orchestrator |
2026-02-19 06:05:39.606114 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-19 06:05:39.606124 | orchestrator | Thursday 19 February 2026 06:05:08 +0000 (0:00:00.807) 0:21:54.391 *****
2026-02-19 06:05:39.606134 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:39.606144 | orchestrator |
2026-02-19 06:05:39.606153 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-19 06:05:39.606163 | orchestrator | Thursday 19 February 2026 06:05:08 +0000 (0:00:00.774) 0:21:55.166 *****
2026-02-19 06:05:39.606173 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:39.606183 | orchestrator |
2026-02-19 06:05:39.606192 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-19 06:05:39.606202 | orchestrator | Thursday 19 February 2026 06:05:09 +0000 (0:00:00.767) 0:21:55.933 *****
2026-02-19 06:05:39.606211 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:05:39.606221 | orchestrator |
2026-02-19 06:05:39.606230 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-02-19 06:05:39.606240 | orchestrator |
2026-02-19 06:05:39.606250 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-19 06:05:39.606259 | orchestrator | Thursday 19 February 2026 06:05:11 +0000 (0:00:01.312) 0:21:57.246 *****
2026-02-19 06:05:39.606319 | orchestrator | changed: [testbed-node-0]
2026-02-19 06:05:39.606329 | orchestrator |
2026-02-19 06:05:39.606339 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-02-19 06:05:39.606349 | orchestrator | Thursday 19 February 2026 06:05:14 +0000 (0:00:03.049) 0:22:00.296 *****
2026-02-19 06:05:39.606358 | orchestrator | changed: [testbed-node-0]
2026-02-19 06:05:39.606368 | orchestrator |
2026-02-19 06:05:39.606377 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-19 06:05:39.606410 | orchestrator | Thursday 19 February 2026 06:05:16 +0000 (0:00:02.519) 0:22:02.815 *****
2026-02-19 06:05:39.606420 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-19 06:05:39.606429 | orchestrator |
2026-02-19 06:05:39.606439 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-19 06:05:39.606449 | orchestrator | Thursday 19 February 2026 06:05:17 +0000 (0:00:01.260) 0:22:04.076 *****
2026-02-19 06:05:39.606458 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:05:39.606469 | orchestrator |
2026-02-19 06:05:39.606478 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-19 06:05:39.606488 | orchestrator | Thursday 19 February 2026 06:05:19 +0000 (0:00:01.465) 0:22:05.542 *****
2026-02-19 06:05:39.606499 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:05:39.606508 | orchestrator |
2026-02-19 06:05:39.606517 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-19 06:05:39.606527 | orchestrator | Thursday 19 February 2026 06:05:20 +0000 (0:00:01.143) 0:22:06.685 *****
2026-02-19 06:05:39.606537 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:05:39.606546 | orchestrator |
2026-02-19 06:05:39.606555 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-19 06:05:39.606565 | orchestrator | Thursday 19 February 2026 06:05:21 +0000 (0:00:01.485) 0:22:08.171 *****
2026-02-19 06:05:39.606574 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:05:39.606583 | orchestrator |
2026-02-19 06:05:39.606593 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-19 06:05:39.606604 | orchestrator | Thursday 19 February 2026 06:05:23 +0000 (0:00:01.127) 0:22:09.298 *****
2026-02-19 06:05:39.606613 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:05:39.606623 | orchestrator |
2026-02-19 06:05:39.606633 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-19 06:05:39.606644 | orchestrator | Thursday 19 February 2026 06:05:24 +0000 (0:00:01.092) 0:22:10.391 *****
2026-02-19 06:05:39.606654 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:05:39.606662 | orchestrator |
2026-02-19 06:05:39.606668 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-19 06:05:39.606675 | orchestrator | Thursday 19 February 2026 06:05:25 +0000 (0:00:01.136) 0:22:11.527 *****
2026-02-19 06:05:39.606681 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:05:39.606686 | orchestrator |
2026-02-19 06:05:39.606692 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-19 06:05:39.606698 | orchestrator | Thursday 19 February 2026 06:05:26 +0000 (0:00:01.136) 0:22:12.663 *****
2026-02-19 06:05:39.606704 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:05:39.606710 | orchestrator |
2026-02-19 06:05:39.606716 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-19 06:05:39.606723 | orchestrator | Thursday 19 February 2026 06:05:27 +0000 (0:00:01.163) 0:22:13.827 *****
2026-02-19 06:05:39.606730 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 06:05:39.606737 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:05:39.606746 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:05:39.606756 | orchestrator |
2026-02-19 06:05:39.606765 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-19 06:05:39.606775 | orchestrator | Thursday 19 February 2026 06:05:29 +0000 (0:00:01.919) 0:22:15.747 *****
2026-02-19 06:05:39.606784 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:05:39.606794 | orchestrator |
2026-02-19 06:05:39.606803 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-19 06:05:39.606812 | orchestrator | Thursday 19 February 2026 06:05:30 +0000 (0:00:01.238) 0:22:16.986 *****
2026-02-19 06:05:39.606837 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 06:05:39.606853 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:05:39.606867 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:05:39.606874 | orchestrator |
2026-02-19 06:05:39.606881 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-19 06:05:39.606888 | orchestrator | Thursday 19 February 2026 06:05:33 +0000 (0:00:03.186) 0:22:20.172 *****
2026-02-19 06:05:39.606895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 06:05:39.606901 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-19 06:05:39.606908 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-19 06:05:39.606917 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:05:39.606927 | orchestrator |
2026-02-19 06:05:39.606937 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-19 06:05:39.606946 | orchestrator | Thursday 19 February 2026 06:05:35 +0000 (0:00:01.734) 0:22:21.907 *****
2026-02-19 06:05:39.606958 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-19 06:05:39.606970 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 06:05:39.606980 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 06:05:39.606990 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:05:39.606998 | orchestrator |
2026-02-19 06:05:39.607004 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-19 06:05:39.607010 | orchestrator | Thursday 19 February 2026 06:05:37 +0000 (0:00:01.623) 0:22:23.530 *****
2026-02-19 06:05:39.607018 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 06:05:39.607026 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 06:05:39.607032 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 06:05:39.607039 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:05:39.607044 | orchestrator |
2026-02-19 06:05:39.607050 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-19 06:05:39.607056 | orchestrator | Thursday 19 February 2026 06:05:38 +0000 (0:00:01.145) 0:22:24.676 *****
2026-02-19 06:05:39.607064 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 06:05:31.617772', 'end': '2026-02-19 06:05:31.672139', 'delta': '0:00:00.054367', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-19 06:05:39.607088 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 06:05:32.207555', 'end': '2026-02-19 06:05:32.257521', 'delta': '0:00:00.049966', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 06:05:57.788587 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '8bdbabe346bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 06:05:32.753697', 'end': '2026-02-19 06:05:32.802315', 'delta': '0:00:00.048618', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8bdbabe346bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 06:05:57.788716 | orchestrator |
2026-02-19 06:05:57.788746 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-19 06:05:57.788769 | orchestrator | Thursday 19 February 2026 06:05:39 +0000 (0:00:01.140) 0:22:25.817 *****
2026-02-19 06:05:57.788789 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:05:57.788809 | orchestrator |
2026-02-19 06:05:57.788828 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-19 06:05:57.788843 | orchestrator | Thursday 19 February 2026 06:05:40 +0000 (0:00:01.207) 0:22:27.041 *****
2026-02-19 06:05:57.788854 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:05:57.788866 | orchestrator |
2026-02-19 06:05:57.788880 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-19 06:05:57.788899 | orchestrator | Thursday 19 February 2026 06:05:42 +0000 (0:00:01.207) 0:22:28.248 *****
2026-02-19 06:05:57.788918 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:05:57.788936 | orchestrator |
2026-02-19 06:05:57.788955 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-19 06:05:57.788972 | orchestrator | Thursday 19 February 2026 06:05:43 +0000 (0:00:01.139) 0:22:29.388 *****
2026-02-19 06:05:57.788990 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:05:57.789008 | orchestrator |
2026-02-19 06:05:57.789026 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 06:05:57.789038 | orchestrator | Thursday 19 February 2026 06:05:45 +0000 (0:00:01.999) 0:22:31.387 *****
2026-02-19 06:05:57.789048 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:05:57.789058 | orchestrator |
2026-02-19 06:05:57.789068 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-19 06:05:57.789078 | orchestrator | Thursday 19 February 2026 06:05:46 +0000 (0:00:01.192) 0:22:32.580 *****
2026-02-19 06:05:57.789087 | orchestrator | skipping:
[testbed-node-0] 2026-02-19 06:05:57.789097 | orchestrator | 2026-02-19 06:05:57.789107 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-19 06:05:57.789117 | orchestrator | Thursday 19 February 2026 06:05:47 +0000 (0:00:01.095) 0:22:33.675 ***** 2026-02-19 06:05:57.789151 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:05:57.789161 | orchestrator | 2026-02-19 06:05:57.789171 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 06:05:57.789181 | orchestrator | Thursday 19 February 2026 06:05:48 +0000 (0:00:01.217) 0:22:34.892 ***** 2026-02-19 06:05:57.789190 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:05:57.789200 | orchestrator | 2026-02-19 06:05:57.789210 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-19 06:05:57.789219 | orchestrator | Thursday 19 February 2026 06:05:49 +0000 (0:00:01.088) 0:22:35.980 ***** 2026-02-19 06:05:57.789229 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:05:57.789238 | orchestrator | 2026-02-19 06:05:57.789253 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-19 06:05:57.789269 | orchestrator | Thursday 19 February 2026 06:05:50 +0000 (0:00:01.107) 0:22:37.088 ***** 2026-02-19 06:05:57.789316 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:05:57.789334 | orchestrator | 2026-02-19 06:05:57.789351 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-19 06:05:57.789366 | orchestrator | Thursday 19 February 2026 06:05:51 +0000 (0:00:01.102) 0:22:38.190 ***** 2026-02-19 06:05:57.789383 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:05:57.789395 | orchestrator | 2026-02-19 06:05:57.789404 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 06:05:57.789414 | 
orchestrator | Thursday 19 February 2026 06:05:53 +0000 (0:00:01.128) 0:22:39.319 ***** 2026-02-19 06:05:57.789424 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:05:57.789433 | orchestrator | 2026-02-19 06:05:57.789443 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 06:05:57.789453 | orchestrator | Thursday 19 February 2026 06:05:54 +0000 (0:00:01.145) 0:22:40.464 ***** 2026-02-19 06:05:57.789462 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:05:57.789472 | orchestrator | 2026-02-19 06:05:57.789482 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 06:05:57.789493 | orchestrator | Thursday 19 February 2026 06:05:55 +0000 (0:00:01.134) 0:22:41.598 ***** 2026-02-19 06:05:57.789517 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:05:57.789528 | orchestrator | 2026-02-19 06:05:57.789538 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 06:05:57.789547 | orchestrator | Thursday 19 February 2026 06:05:56 +0000 (0:00:01.160) 0:22:42.758 ***** 2026-02-19 06:05:57.789578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:05:57.789591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:05:57.789601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:05:57.789612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 06:05:57.789634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:05:57.789644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:05:57.789655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:05:57.789683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d17f80a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 
'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 06:05:59.015827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:05:59.016029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:05:59.016063 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:05:59.016085 | orchestrator | 2026-02-19 06:05:59.016108 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 06:05:59.016130 | orchestrator | Thursday 19 February 2026 06:05:57 +0000 (0:00:01.238) 0:22:43.997 ***** 2026-02-19 06:05:59.016153 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:05:59.016202 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:05:59.016242 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:05:59.016265 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:05:59.016345 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:05:59.016386 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:05:59.016401 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:05:59.016427 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d17f80a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': 
'2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:05:59.016459 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:06:38.296549 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:06:38.296693 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.296711 | orchestrator | 2026-02-19 06:06:38.296725 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 06:06:38.296738 | orchestrator | Thursday 19 February 2026 06:05:59 +0000 (0:00:01.230) 0:22:45.228 ***** 2026-02-19 06:06:38.296749 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:06:38.296760 | orchestrator | 2026-02-19 06:06:38.296772 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 06:06:38.296783 | orchestrator 
| Thursday 19 February 2026 06:06:00 +0000 (0:00:01.519) 0:22:46.747 ***** 2026-02-19 06:06:38.296794 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:06:38.296805 | orchestrator | 2026-02-19 06:06:38.296816 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:06:38.296827 | orchestrator | Thursday 19 February 2026 06:06:01 +0000 (0:00:01.131) 0:22:47.879 ***** 2026-02-19 06:06:38.296838 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:06:38.296849 | orchestrator | 2026-02-19 06:06:38.296860 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:06:38.296871 | orchestrator | Thursday 19 February 2026 06:06:03 +0000 (0:00:01.501) 0:22:49.380 ***** 2026-02-19 06:06:38.296886 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.296905 | orchestrator | 2026-02-19 06:06:38.296923 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:06:38.296942 | orchestrator | Thursday 19 February 2026 06:06:04 +0000 (0:00:01.140) 0:22:50.521 ***** 2026-02-19 06:06:38.296957 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.296969 | orchestrator | 2026-02-19 06:06:38.296979 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:06:38.296990 | orchestrator | Thursday 19 February 2026 06:06:05 +0000 (0:00:01.254) 0:22:51.775 ***** 2026-02-19 06:06:38.297001 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.297012 | orchestrator | 2026-02-19 06:06:38.297023 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 06:06:38.297082 | orchestrator | Thursday 19 February 2026 06:06:06 +0000 (0:00:01.137) 0:22:52.913 ***** 2026-02-19 06:06:38.297097 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 06:06:38.297110 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-02-19 06:06:38.297123 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-19 06:06:38.297136 | orchestrator | 2026-02-19 06:06:38.297149 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 06:06:38.297162 | orchestrator | Thursday 19 February 2026 06:06:08 +0000 (0:00:01.954) 0:22:54.868 ***** 2026-02-19 06:06:38.297174 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-19 06:06:38.297227 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-19 06:06:38.297241 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-19 06:06:38.297253 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.297266 | orchestrator | 2026-02-19 06:06:38.297278 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 06:06:38.297292 | orchestrator | Thursday 19 February 2026 06:06:09 +0000 (0:00:01.154) 0:22:56.022 ***** 2026-02-19 06:06:38.297304 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.297355 | orchestrator | 2026-02-19 06:06:38.297368 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-19 06:06:38.297381 | orchestrator | Thursday 19 February 2026 06:06:10 +0000 (0:00:01.102) 0:22:57.124 ***** 2026-02-19 06:06:38.297393 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 06:06:38.297406 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:06:38.297419 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:06:38.297432 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 06:06:38.297444 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-19 06:06:38.297457 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 06:06:38.297468 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 06:06:38.297479 | orchestrator | 2026-02-19 06:06:38.297489 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-19 06:06:38.297500 | orchestrator | Thursday 19 February 2026 06:06:12 +0000 (0:00:01.865) 0:22:58.989 ***** 2026-02-19 06:06:38.297511 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 06:06:38.297522 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:06:38.297534 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:06:38.297553 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 06:06:38.297594 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 06:06:38.297613 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 06:06:38.297630 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 06:06:38.297648 | orchestrator | 2026-02-19 06:06:38.297666 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-19 06:06:38.297685 | orchestrator | Thursday 19 February 2026 06:06:15 +0000 (0:00:02.564) 0:23:01.554 ***** 2026-02-19 06:06:38.297703 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-19 06:06:38.297722 | orchestrator | 2026-02-19 06:06:38.297740 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-19 06:06:38.297759 
| orchestrator | Thursday 19 February 2026 06:06:16 +0000 (0:00:01.149) 0:23:02.703 ***** 2026-02-19 06:06:38.297776 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-19 06:06:38.297795 | orchestrator | 2026-02-19 06:06:38.297814 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-19 06:06:38.297831 | orchestrator | Thursday 19 February 2026 06:06:17 +0000 (0:00:01.119) 0:23:03.823 ***** 2026-02-19 06:06:38.297842 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:06:38.297853 | orchestrator | 2026-02-19 06:06:38.297864 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-19 06:06:38.297874 | orchestrator | Thursday 19 February 2026 06:06:19 +0000 (0:00:01.549) 0:23:05.373 ***** 2026-02-19 06:06:38.297885 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.297896 | orchestrator | 2026-02-19 06:06:38.297918 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-19 06:06:38.297929 | orchestrator | Thursday 19 February 2026 06:06:20 +0000 (0:00:01.130) 0:23:06.503 ***** 2026-02-19 06:06:38.297939 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.297950 | orchestrator | 2026-02-19 06:06:38.297961 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-19 06:06:38.297972 | orchestrator | Thursday 19 February 2026 06:06:21 +0000 (0:00:01.137) 0:23:07.641 ***** 2026-02-19 06:06:38.297982 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.297993 | orchestrator | 2026-02-19 06:06:38.298004 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-19 06:06:38.298076 | orchestrator | Thursday 19 February 2026 06:06:22 +0000 (0:00:01.117) 0:23:08.758 ***** 2026-02-19 06:06:38.298092 | orchestrator | ok: [testbed-node-0] 
2026-02-19 06:06:38.298104 | orchestrator | 2026-02-19 06:06:38.298115 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-19 06:06:38.298125 | orchestrator | Thursday 19 February 2026 06:06:24 +0000 (0:00:01.555) 0:23:10.313 ***** 2026-02-19 06:06:38.298136 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.298147 | orchestrator | 2026-02-19 06:06:38.298158 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-19 06:06:38.298169 | orchestrator | Thursday 19 February 2026 06:06:25 +0000 (0:00:01.120) 0:23:11.434 ***** 2026-02-19 06:06:38.298179 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.298190 | orchestrator | 2026-02-19 06:06:38.298201 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-19 06:06:38.298212 | orchestrator | Thursday 19 February 2026 06:06:26 +0000 (0:00:01.134) 0:23:12.569 ***** 2026-02-19 06:06:38.298223 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:06:38.298234 | orchestrator | 2026-02-19 06:06:38.298245 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-19 06:06:38.298263 | orchestrator | Thursday 19 February 2026 06:06:27 +0000 (0:00:01.554) 0:23:14.124 ***** 2026-02-19 06:06:38.298275 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:06:38.298286 | orchestrator | 2026-02-19 06:06:38.298296 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-19 06:06:38.298338 | orchestrator | Thursday 19 February 2026 06:06:29 +0000 (0:00:01.518) 0:23:15.642 ***** 2026-02-19 06:06:38.298360 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.298377 | orchestrator | 2026-02-19 06:06:38.298388 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-19 06:06:38.298399 | orchestrator | Thursday 19 
February 2026 06:06:30 +0000 (0:00:01.094) 0:23:16.737 ***** 2026-02-19 06:06:38.298410 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:06:38.298421 | orchestrator | 2026-02-19 06:06:38.298431 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-19 06:06:38.298442 | orchestrator | Thursday 19 February 2026 06:06:31 +0000 (0:00:01.125) 0:23:17.863 ***** 2026-02-19 06:06:38.298453 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.298464 | orchestrator | 2026-02-19 06:06:38.298474 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-19 06:06:38.298485 | orchestrator | Thursday 19 February 2026 06:06:32 +0000 (0:00:01.092) 0:23:18.956 ***** 2026-02-19 06:06:38.298496 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.298507 | orchestrator | 2026-02-19 06:06:38.298518 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-19 06:06:38.298528 | orchestrator | Thursday 19 February 2026 06:06:33 +0000 (0:00:01.115) 0:23:20.071 ***** 2026-02-19 06:06:38.298539 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.298554 | orchestrator | 2026-02-19 06:06:38.298573 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-19 06:06:38.298590 | orchestrator | Thursday 19 February 2026 06:06:34 +0000 (0:00:01.110) 0:23:21.181 ***** 2026-02-19 06:06:38.298608 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.298625 | orchestrator | 2026-02-19 06:06:38.298655 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-19 06:06:38.298674 | orchestrator | Thursday 19 February 2026 06:06:36 +0000 (0:00:01.111) 0:23:22.292 ***** 2026-02-19 06:06:38.298692 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:06:38.298710 | orchestrator | 2026-02-19 06:06:38.298727 | orchestrator | 
TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-19 06:06:38.298745 | orchestrator | Thursday 19 February 2026 06:06:37 +0000 (0:00:01.100) 0:23:23.393 ***** 2026-02-19 06:06:38.298780 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:07:26.219916 | orchestrator | 2026-02-19 06:07:26.220037 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-19 06:07:26.220047 | orchestrator | Thursday 19 February 2026 06:06:38 +0000 (0:00:01.115) 0:23:24.508 ***** 2026-02-19 06:07:26.220051 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:07:26.220057 | orchestrator | 2026-02-19 06:07:26.220061 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-19 06:07:26.220065 | orchestrator | Thursday 19 February 2026 06:06:39 +0000 (0:00:01.139) 0:23:25.648 ***** 2026-02-19 06:07:26.220069 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:07:26.220073 | orchestrator | 2026-02-19 06:07:26.220077 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-19 06:07:26.220081 | orchestrator | Thursday 19 February 2026 06:06:40 +0000 (0:00:01.161) 0:23:26.810 ***** 2026-02-19 06:07:26.220085 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220090 | orchestrator | 2026-02-19 06:07:26.220094 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-19 06:07:26.220098 | orchestrator | Thursday 19 February 2026 06:06:41 +0000 (0:00:01.096) 0:23:27.906 ***** 2026-02-19 06:07:26.220102 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220106 | orchestrator | 2026-02-19 06:07:26.220110 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-19 06:07:26.220114 | orchestrator | Thursday 19 February 2026 06:06:42 +0000 (0:00:01.108) 0:23:29.015 ***** 2026-02-19 06:07:26.220118 | 
orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220121 | orchestrator | 2026-02-19 06:07:26.220125 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-19 06:07:26.220129 | orchestrator | Thursday 19 February 2026 06:06:43 +0000 (0:00:01.099) 0:23:30.115 ***** 2026-02-19 06:07:26.220133 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220136 | orchestrator | 2026-02-19 06:07:26.220140 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-19 06:07:26.220144 | orchestrator | Thursday 19 February 2026 06:06:44 +0000 (0:00:01.045) 0:23:31.160 ***** 2026-02-19 06:07:26.220148 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220151 | orchestrator | 2026-02-19 06:07:26.220155 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-19 06:07:26.220159 | orchestrator | Thursday 19 February 2026 06:06:45 +0000 (0:00:00.905) 0:23:32.066 ***** 2026-02-19 06:07:26.220163 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220167 | orchestrator | 2026-02-19 06:07:26.220170 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-19 06:07:26.220174 | orchestrator | Thursday 19 February 2026 06:06:46 +0000 (0:00:01.112) 0:23:33.178 ***** 2026-02-19 06:07:26.220178 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220182 | orchestrator | 2026-02-19 06:07:26.220186 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-19 06:07:26.220192 | orchestrator | Thursday 19 February 2026 06:06:48 +0000 (0:00:01.095) 0:23:34.274 ***** 2026-02-19 06:07:26.220198 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220204 | orchestrator | 2026-02-19 06:07:26.220209 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] 
************************* 2026-02-19 06:07:26.220215 | orchestrator | Thursday 19 February 2026 06:06:49 +0000 (0:00:01.076) 0:23:35.351 ***** 2026-02-19 06:07:26.220221 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220246 | orchestrator | 2026-02-19 06:07:26.220253 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-19 06:07:26.220258 | orchestrator | Thursday 19 February 2026 06:06:50 +0000 (0:00:01.076) 0:23:36.427 ***** 2026-02-19 06:07:26.220264 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220269 | orchestrator | 2026-02-19 06:07:26.220275 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-19 06:07:26.220281 | orchestrator | Thursday 19 February 2026 06:06:51 +0000 (0:00:01.082) 0:23:37.510 ***** 2026-02-19 06:07:26.220287 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220293 | orchestrator | 2026-02-19 06:07:26.220300 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-19 06:07:26.220306 | orchestrator | Thursday 19 February 2026 06:06:52 +0000 (0:00:01.087) 0:23:38.598 ***** 2026-02-19 06:07:26.220312 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220318 | orchestrator | 2026-02-19 06:07:26.220324 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-19 06:07:26.220331 | orchestrator | Thursday 19 February 2026 06:06:53 +0000 (0:00:01.102) 0:23:39.700 ***** 2026-02-19 06:07:26.220336 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:07:26.220399 | orchestrator | 2026-02-19 06:07:26.220406 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-19 06:07:26.220410 | orchestrator | Thursday 19 February 2026 06:06:55 +0000 (0:00:01.990) 0:23:41.691 ***** 2026-02-19 06:07:26.220413 | orchestrator | ok: [testbed-node-0] 2026-02-19 
06:07:26.220417 | orchestrator | 2026-02-19 06:07:26.220421 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-19 06:07:26.220425 | orchestrator | Thursday 19 February 2026 06:06:58 +0000 (0:00:02.535) 0:23:44.227 ***** 2026-02-19 06:07:26.220429 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-02-19 06:07:26.220434 | orchestrator | 2026-02-19 06:07:26.220438 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-19 06:07:26.220442 | orchestrator | Thursday 19 February 2026 06:06:59 +0000 (0:00:01.132) 0:23:45.360 ***** 2026-02-19 06:07:26.220446 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220450 | orchestrator | 2026-02-19 06:07:26.220454 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-19 06:07:26.220457 | orchestrator | Thursday 19 February 2026 06:07:00 +0000 (0:00:01.113) 0:23:46.473 ***** 2026-02-19 06:07:26.220461 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220465 | orchestrator | 2026-02-19 06:07:26.220469 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-19 06:07:26.220473 | orchestrator | Thursday 19 February 2026 06:07:01 +0000 (0:00:01.138) 0:23:47.612 ***** 2026-02-19 06:07:26.220488 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-19 06:07:26.220493 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-19 06:07:26.220496 | orchestrator | 2026-02-19 06:07:26.220500 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-19 06:07:26.220504 | orchestrator | Thursday 19 February 2026 06:07:03 +0000 (0:00:01.877) 0:23:49.490 ***** 2026-02-19 06:07:26.220508 | orchestrator | ok: 
[testbed-node-0] 2026-02-19 06:07:26.220511 | orchestrator | 2026-02-19 06:07:26.220515 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-19 06:07:26.220519 | orchestrator | Thursday 19 February 2026 06:07:04 +0000 (0:00:01.534) 0:23:51.024 ***** 2026-02-19 06:07:26.220523 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220526 | orchestrator | 2026-02-19 06:07:26.220530 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-19 06:07:26.220534 | orchestrator | Thursday 19 February 2026 06:07:05 +0000 (0:00:01.155) 0:23:52.180 ***** 2026-02-19 06:07:26.220538 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220541 | orchestrator | 2026-02-19 06:07:26.220545 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-19 06:07:26.220556 | orchestrator | Thursday 19 February 2026 06:07:07 +0000 (0:00:01.144) 0:23:53.325 ***** 2026-02-19 06:07:26.220560 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220563 | orchestrator | 2026-02-19 06:07:26.220567 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-19 06:07:26.220599 | orchestrator | Thursday 19 February 2026 06:07:08 +0000 (0:00:01.111) 0:23:54.437 ***** 2026-02-19 06:07:26.220604 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-02-19 06:07:26.220607 | orchestrator | 2026-02-19 06:07:26.220611 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-19 06:07:26.220615 | orchestrator | Thursday 19 February 2026 06:07:09 +0000 (0:00:01.089) 0:23:55.526 ***** 2026-02-19 06:07:26.220619 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:07:26.220625 | orchestrator | 2026-02-19 06:07:26.220632 | orchestrator | TASK [ceph-container-common : Pulling 
alertmanager/prometheus/grafana container images] *** 2026-02-19 06:07:26.220638 | orchestrator | Thursday 19 February 2026 06:07:11 +0000 (0:00:01.747) 0:23:57.274 ***** 2026-02-19 06:07:26.220644 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 06:07:26.220650 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 06:07:26.220656 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 06:07:26.220662 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220668 | orchestrator | 2026-02-19 06:07:26.220671 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-19 06:07:26.220675 | orchestrator | Thursday 19 February 2026 06:07:12 +0000 (0:00:01.167) 0:23:58.442 ***** 2026-02-19 06:07:26.220679 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220682 | orchestrator | 2026-02-19 06:07:26.220686 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-19 06:07:26.220690 | orchestrator | Thursday 19 February 2026 06:07:13 +0000 (0:00:01.093) 0:23:59.535 ***** 2026-02-19 06:07:26.220693 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220697 | orchestrator | 2026-02-19 06:07:26.220701 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-19 06:07:26.220705 | orchestrator | Thursday 19 February 2026 06:07:14 +0000 (0:00:01.140) 0:24:00.676 ***** 2026-02-19 06:07:26.220708 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220712 | orchestrator | 2026-02-19 06:07:26.220719 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-19 06:07:26.220723 | orchestrator | Thursday 19 February 2026 06:07:15 +0000 (0:00:01.152) 0:24:01.829 ***** 2026-02-19 06:07:26.220726 | orchestrator | skipping: 
[testbed-node-0] 2026-02-19 06:07:26.220730 | orchestrator | 2026-02-19 06:07:26.220734 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-19 06:07:26.220737 | orchestrator | Thursday 19 February 2026 06:07:16 +0000 (0:00:01.143) 0:24:02.972 ***** 2026-02-19 06:07:26.220741 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220745 | orchestrator | 2026-02-19 06:07:26.220749 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-19 06:07:26.220753 | orchestrator | Thursday 19 February 2026 06:07:17 +0000 (0:00:01.105) 0:24:04.078 ***** 2026-02-19 06:07:26.220756 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:07:26.220760 | orchestrator | 2026-02-19 06:07:26.220764 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-19 06:07:26.220767 | orchestrator | Thursday 19 February 2026 06:07:20 +0000 (0:00:02.589) 0:24:06.667 ***** 2026-02-19 06:07:26.220771 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:07:26.220775 | orchestrator | 2026-02-19 06:07:26.220779 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-19 06:07:26.220782 | orchestrator | Thursday 19 February 2026 06:07:21 +0000 (0:00:01.110) 0:24:07.778 ***** 2026-02-19 06:07:26.220786 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-02-19 06:07:26.220794 | orchestrator | 2026-02-19 06:07:26.220797 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-19 06:07:26.220801 | orchestrator | Thursday 19 February 2026 06:07:22 +0000 (0:00:01.183) 0:24:08.962 ***** 2026-02-19 06:07:26.220805 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220809 | orchestrator | 2026-02-19 06:07:26.220812 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] 
******************** 2026-02-19 06:07:26.220816 | orchestrator | Thursday 19 February 2026 06:07:23 +0000 (0:00:01.132) 0:24:10.094 ***** 2026-02-19 06:07:26.220820 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220824 | orchestrator | 2026-02-19 06:07:26.220827 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-19 06:07:26.220831 | orchestrator | Thursday 19 February 2026 06:07:25 +0000 (0:00:01.132) 0:24:11.227 ***** 2026-02-19 06:07:26.220835 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:07:26.220839 | orchestrator | 2026-02-19 06:07:26.220846 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-19 06:08:09.682234 | orchestrator | Thursday 19 February 2026 06:07:26 +0000 (0:00:01.204) 0:24:12.432 ***** 2026-02-19 06:08:09.682345 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.682363 | orchestrator | 2026-02-19 06:08:09.682448 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-19 06:08:09.682463 | orchestrator | Thursday 19 February 2026 06:07:27 +0000 (0:00:01.223) 0:24:13.655 ***** 2026-02-19 06:08:09.682474 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.682485 | orchestrator | 2026-02-19 06:08:09.682496 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-19 06:08:09.682507 | orchestrator | Thursday 19 February 2026 06:07:28 +0000 (0:00:01.149) 0:24:14.804 ***** 2026-02-19 06:08:09.682517 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.682528 | orchestrator | 2026-02-19 06:08:09.682539 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-19 06:08:09.682550 | orchestrator | Thursday 19 February 2026 06:07:29 +0000 (0:00:01.153) 0:24:15.958 ***** 2026-02-19 06:08:09.682560 | orchestrator | skipping: [testbed-node-0] 
2026-02-19 06:08:09.682571 | orchestrator | 2026-02-19 06:08:09.682581 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-19 06:08:09.682592 | orchestrator | Thursday 19 February 2026 06:07:30 +0000 (0:00:01.138) 0:24:17.097 ***** 2026-02-19 06:08:09.682602 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.682612 | orchestrator | 2026-02-19 06:08:09.682623 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-19 06:08:09.682634 | orchestrator | Thursday 19 February 2026 06:07:32 +0000 (0:00:01.134) 0:24:18.232 ***** 2026-02-19 06:08:09.682645 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:08:09.682656 | orchestrator | 2026-02-19 06:08:09.682667 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-19 06:08:09.682678 | orchestrator | Thursday 19 February 2026 06:07:33 +0000 (0:00:01.134) 0:24:19.366 ***** 2026-02-19 06:08:09.682688 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-02-19 06:08:09.682700 | orchestrator | 2026-02-19 06:08:09.682710 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-19 06:08:09.682721 | orchestrator | Thursday 19 February 2026 06:07:34 +0000 (0:00:01.093) 0:24:20.460 ***** 2026-02-19 06:08:09.682732 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-02-19 06:08:09.682743 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-19 06:08:09.682754 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-19 06:08:09.682765 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-19 06:08:09.682776 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-19 06:08:09.682786 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-19 06:08:09.682796 | 
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-19 06:08:09.682833 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-19 06:08:09.682845 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-19 06:08:09.682856 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-19 06:08:09.682867 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-19 06:08:09.682878 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-19 06:08:09.682888 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-19 06:08:09.682913 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-19 06:08:09.682926 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-02-19 06:08:09.682936 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-02-19 06:08:09.682947 | orchestrator | 2026-02-19 06:08:09.682958 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-19 06:08:09.682968 | orchestrator | Thursday 19 February 2026 06:07:41 +0000 (0:00:07.309) 0:24:27.770 ***** 2026-02-19 06:08:09.682979 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.682989 | orchestrator | 2026-02-19 06:08:09.683000 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-19 06:08:09.683012 | orchestrator | Thursday 19 February 2026 06:07:42 +0000 (0:00:01.120) 0:24:28.891 ***** 2026-02-19 06:08:09.683023 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683034 | orchestrator | 2026-02-19 06:08:09.683045 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-19 06:08:09.683055 | orchestrator | Thursday 19 February 2026 06:07:43 +0000 (0:00:01.093) 0:24:29.984 ***** 2026-02-19 06:08:09.683066 | 
orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683077 | orchestrator | 2026-02-19 06:08:09.683087 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-19 06:08:09.683097 | orchestrator | Thursday 19 February 2026 06:07:44 +0000 (0:00:01.125) 0:24:31.109 ***** 2026-02-19 06:08:09.683107 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683118 | orchestrator | 2026-02-19 06:08:09.683128 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-19 06:08:09.683138 | orchestrator | Thursday 19 February 2026 06:07:46 +0000 (0:00:01.167) 0:24:32.277 ***** 2026-02-19 06:08:09.683147 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683157 | orchestrator | 2026-02-19 06:08:09.683167 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-19 06:08:09.683177 | orchestrator | Thursday 19 February 2026 06:07:47 +0000 (0:00:01.115) 0:24:33.393 ***** 2026-02-19 06:08:09.683188 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683198 | orchestrator | 2026-02-19 06:08:09.683209 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-19 06:08:09.683220 | orchestrator | Thursday 19 February 2026 06:07:48 +0000 (0:00:01.185) 0:24:34.578 ***** 2026-02-19 06:08:09.683231 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683241 | orchestrator | 2026-02-19 06:08:09.683270 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-19 06:08:09.683281 | orchestrator | Thursday 19 February 2026 06:07:49 +0000 (0:00:01.105) 0:24:35.684 ***** 2026-02-19 06:08:09.683293 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683303 | orchestrator | 2026-02-19 06:08:09.683314 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-19 06:08:09.683324 | orchestrator | Thursday 19 February 2026 06:07:50 +0000 (0:00:01.148) 0:24:36.832 ***** 2026-02-19 06:08:09.683334 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683344 | orchestrator | 2026-02-19 06:08:09.683355 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-19 06:08:09.683365 | orchestrator | Thursday 19 February 2026 06:07:51 +0000 (0:00:01.111) 0:24:37.944 ***** 2026-02-19 06:08:09.683405 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683418 | orchestrator | 2026-02-19 06:08:09.683429 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-19 06:08:09.683440 | orchestrator | Thursday 19 February 2026 06:07:52 +0000 (0:00:01.110) 0:24:39.055 ***** 2026-02-19 06:08:09.683451 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683462 | orchestrator | 2026-02-19 06:08:09.683473 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-19 06:08:09.683484 | orchestrator | Thursday 19 February 2026 06:07:53 +0000 (0:00:01.130) 0:24:40.185 ***** 2026-02-19 06:08:09.683495 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683506 | orchestrator | 2026-02-19 06:08:09.683517 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-19 06:08:09.683528 | orchestrator | Thursday 19 February 2026 06:07:55 +0000 (0:00:01.124) 0:24:41.310 ***** 2026-02-19 06:08:09.683539 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683550 | orchestrator | 2026-02-19 06:08:09.683561 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-19 06:08:09.683572 | orchestrator | Thursday 19 February 2026 06:07:56 +0000 (0:00:01.291) 0:24:42.601 ***** 
2026-02-19 06:08:09.683583 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683595 | orchestrator | 2026-02-19 06:08:09.683606 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-19 06:08:09.683618 | orchestrator | Thursday 19 February 2026 06:07:57 +0000 (0:00:01.131) 0:24:43.733 ***** 2026-02-19 06:08:09.683629 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683639 | orchestrator | 2026-02-19 06:08:09.683651 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-19 06:08:09.683661 | orchestrator | Thursday 19 February 2026 06:07:58 +0000 (0:00:01.223) 0:24:44.957 ***** 2026-02-19 06:08:09.683672 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683683 | orchestrator | 2026-02-19 06:08:09.683694 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-19 06:08:09.683705 | orchestrator | Thursday 19 February 2026 06:07:59 +0000 (0:00:01.137) 0:24:46.094 ***** 2026-02-19 06:08:09.683717 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683728 | orchestrator | 2026-02-19 06:08:09.683739 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-19 06:08:09.683752 | orchestrator | Thursday 19 February 2026 06:08:00 +0000 (0:00:01.126) 0:24:47.220 ***** 2026-02-19 06:08:09.683763 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683774 | orchestrator | 2026-02-19 06:08:09.683786 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-19 06:08:09.683803 | orchestrator | Thursday 19 February 2026 06:08:02 +0000 (0:00:01.145) 0:24:48.366 ***** 2026-02-19 06:08:09.683815 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683826 | orchestrator | 2026-02-19 06:08:09.683837 | orchestrator | 
TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-19 06:08:09.683848 | orchestrator | Thursday 19 February 2026 06:08:03 +0000 (0:00:01.113) 0:24:49.479 ***** 2026-02-19 06:08:09.683859 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683870 | orchestrator | 2026-02-19 06:08:09.683881 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-19 06:08:09.683892 | orchestrator | Thursday 19 February 2026 06:08:04 +0000 (0:00:01.107) 0:24:50.586 ***** 2026-02-19 06:08:09.683903 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.683914 | orchestrator | 2026-02-19 06:08:09.683925 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-19 06:08:09.683936 | orchestrator | Thursday 19 February 2026 06:08:05 +0000 (0:00:01.148) 0:24:51.735 ***** 2026-02-19 06:08:09.683947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-19 06:08:09.683958 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-19 06:08:09.683977 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-19 06:08:09.683989 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.684000 | orchestrator | 2026-02-19 06:08:09.684011 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-19 06:08:09.684022 | orchestrator | Thursday 19 February 2026 06:08:06 +0000 (0:00:01.414) 0:24:53.150 ***** 2026-02-19 06:08:09.684033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-19 06:08:09.684044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-19 06:08:09.684057 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-19 06:08:09.684067 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.684079 | orchestrator | 2026-02-19 06:08:09.684087 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 06:08:09.684094 | orchestrator | Thursday 19 February 2026 06:08:08 +0000 (0:00:01.357) 0:24:54.508 ***** 2026-02-19 06:08:09.684100 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-19 06:08:09.684106 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-19 06:08:09.684112 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-19 06:08:09.684118 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:08:09.684128 | orchestrator | 2026-02-19 06:08:09.684141 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 06:09:13.424285 | orchestrator | Thursday 19 February 2026 06:08:09 +0000 (0:00:01.381) 0:24:55.889 ***** 2026-02-19 06:09:13.424406 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:09:13.424441 | orchestrator | 2026-02-19 06:09:13.424452 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-19 06:09:13.424460 | orchestrator | Thursday 19 February 2026 06:08:10 +0000 (0:00:01.121) 0:24:57.011 ***** 2026-02-19 06:09:13.424469 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-19 06:09:13.424476 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:09:13.424484 | orchestrator | 2026-02-19 06:09:13.424491 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-19 06:09:13.424499 | orchestrator | Thursday 19 February 2026 06:08:12 +0000 (0:00:01.271) 0:24:58.282 ***** 2026-02-19 06:09:13.424507 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:09:13.424515 | orchestrator | 2026-02-19 06:09:13.424522 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-19 06:09:13.424530 | orchestrator | Thursday 19 February 2026 06:08:13 +0000 (0:00:01.724) 
0:25:00.007 ***** 2026-02-19 06:09:13.424537 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 06:09:13.424545 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:09:13.424554 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:09:13.424561 | orchestrator | 2026-02-19 06:09:13.424569 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-19 06:09:13.424576 | orchestrator | Thursday 19 February 2026 06:08:15 +0000 (0:00:01.644) 0:25:01.651 ***** 2026-02-19 06:09:13.424583 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-02-19 06:09:13.424591 | orchestrator | 2026-02-19 06:09:13.424598 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-19 06:09:13.424605 | orchestrator | Thursday 19 February 2026 06:08:16 +0000 (0:00:01.517) 0:25:03.168 ***** 2026-02-19 06:09:13.424613 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:09:13.424620 | orchestrator | 2026-02-19 06:09:13.424628 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-19 06:09:13.424635 | orchestrator | Thursday 19 February 2026 06:08:18 +0000 (0:00:01.488) 0:25:04.657 ***** 2026-02-19 06:09:13.424642 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:09:13.424650 | orchestrator | 2026-02-19 06:09:13.424657 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-19 06:09:13.424665 | orchestrator | Thursday 19 February 2026 06:08:19 +0000 (0:00:01.119) 0:25:05.777 ***** 2026-02-19 06:09:13.424693 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-19 06:09:13.424701 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-19 06:09:13.424718 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-19 
06:09:13.424726 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-19 06:09:13.424733 | orchestrator | 2026-02-19 06:09:13.424740 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-19 06:09:13.424755 | orchestrator | Thursday 19 February 2026 06:08:27 +0000 (0:00:07.945) 0:25:13.722 ***** 2026-02-19 06:09:13.424763 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:09:13.424770 | orchestrator | 2026-02-19 06:09:13.424778 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-19 06:09:13.424785 | orchestrator | Thursday 19 February 2026 06:08:28 +0000 (0:00:01.153) 0:25:14.876 ***** 2026-02-19 06:09:13.424805 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-19 06:09:13.424813 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-19 06:09:13.424821 | orchestrator | 2026-02-19 06:09:13.424830 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-19 06:09:13.424838 | orchestrator | Thursday 19 February 2026 06:08:31 +0000 (0:00:03.242) 0:25:18.118 ***** 2026-02-19 06:09:13.424847 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-19 06:09:13.424855 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-19 06:09:13.424863 | orchestrator | 2026-02-19 06:09:13.424872 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-19 06:09:13.424880 | orchestrator | Thursday 19 February 2026 06:08:34 +0000 (0:00:02.123) 0:25:20.242 ***** 2026-02-19 06:09:13.424888 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:09:13.424897 | orchestrator | 2026-02-19 06:09:13.424905 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-19 06:09:13.424915 | orchestrator | Thursday 19 February 2026 06:08:35 +0000 (0:00:01.517) 
0:25:21.760 ***** 2026-02-19 06:09:13.424924 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:09:13.424932 | orchestrator | 2026-02-19 06:09:13.424939 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-19 06:09:13.424946 | orchestrator | Thursday 19 February 2026 06:08:36 +0000 (0:00:01.130) 0:25:22.890 ***** 2026-02-19 06:09:13.424954 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:09:13.424961 | orchestrator | 2026-02-19 06:09:13.424968 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-19 06:09:13.424975 | orchestrator | Thursday 19 February 2026 06:08:37 +0000 (0:00:01.118) 0:25:24.009 ***** 2026-02-19 06:09:13.424983 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-02-19 06:09:13.424990 | orchestrator | 2026-02-19 06:09:13.424997 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-19 06:09:13.425004 | orchestrator | Thursday 19 February 2026 06:08:39 +0000 (0:00:01.448) 0:25:25.458 ***** 2026-02-19 06:09:13.425012 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:09:13.425019 | orchestrator | 2026-02-19 06:09:13.425026 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-19 06:09:13.425033 | orchestrator | Thursday 19 February 2026 06:08:40 +0000 (0:00:01.135) 0:25:26.593 ***** 2026-02-19 06:09:13.425041 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:09:13.425048 | orchestrator | 2026-02-19 06:09:13.425056 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-19 06:09:13.425077 | orchestrator | Thursday 19 February 2026 06:08:41 +0000 (0:00:01.129) 0:25:27.723 ***** 2026-02-19 06:09:13.425085 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-02-19 06:09:13.425092 | 
orchestrator | 2026-02-19 06:09:13.425100 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-19 06:09:13.425107 | orchestrator | Thursday 19 February 2026 06:08:42 +0000 (0:00:01.421) 0:25:29.145 ***** 2026-02-19 06:09:13.425120 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:09:13.425127 | orchestrator | 2026-02-19 06:09:13.425135 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-19 06:09:13.425142 | orchestrator | Thursday 19 February 2026 06:08:44 +0000 (0:00:02.013) 0:25:31.159 ***** 2026-02-19 06:09:13.425149 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:09:13.425157 | orchestrator | 2026-02-19 06:09:13.425164 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-19 06:09:13.425171 | orchestrator | Thursday 19 February 2026 06:08:46 +0000 (0:00:01.989) 0:25:33.148 ***** 2026-02-19 06:09:13.425178 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:09:13.425186 | orchestrator | 2026-02-19 06:09:13.425193 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-19 06:09:13.425200 | orchestrator | Thursday 19 February 2026 06:08:49 +0000 (0:00:02.558) 0:25:35.707 ***** 2026-02-19 06:09:13.425207 | orchestrator | changed: [testbed-node-0] 2026-02-19 06:09:13.425215 | orchestrator | 2026-02-19 06:09:13.425222 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-19 06:09:13.425229 | orchestrator | Thursday 19 February 2026 06:08:53 +0000 (0:00:04.071) 0:25:39.778 ***** 2026-02-19 06:09:13.425237 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:09:13.425244 | orchestrator | 2026-02-19 06:09:13.425251 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-19 06:09:13.425258 | orchestrator | 2026-02-19 06:09:13.425266 | orchestrator | TASK 
[Stop ceph mgr] *********************************************************** 2026-02-19 06:09:13.425273 | orchestrator | Thursday 19 February 2026 06:08:54 +0000 (0:00:01.317) 0:25:41.095 ***** 2026-02-19 06:09:13.425280 | orchestrator | changed: [testbed-node-1] 2026-02-19 06:09:13.425288 | orchestrator | 2026-02-19 06:09:13.425295 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-19 06:09:13.425302 | orchestrator | Thursday 19 February 2026 06:08:57 +0000 (0:00:02.560) 0:25:43.656 ***** 2026-02-19 06:09:13.425309 | orchestrator | changed: [testbed-node-1] 2026-02-19 06:09:13.425317 | orchestrator | 2026-02-19 06:09:13.425324 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 06:09:13.425331 | orchestrator | Thursday 19 February 2026 06:08:59 +0000 (0:00:02.225) 0:25:45.882 ***** 2026-02-19 06:09:13.425342 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-19 06:09:13.425354 | orchestrator | 2026-02-19 06:09:13.425368 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-19 06:09:13.425381 | orchestrator | Thursday 19 February 2026 06:09:00 +0000 (0:00:01.115) 0:25:46.997 ***** 2026-02-19 06:09:13.425394 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:13.425407 | orchestrator | 2026-02-19 06:09:13.425419 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-19 06:09:13.425449 | orchestrator | Thursday 19 February 2026 06:09:02 +0000 (0:00:01.466) 0:25:48.463 ***** 2026-02-19 06:09:13.425461 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:13.425472 | orchestrator | 2026-02-19 06:09:13.425484 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 06:09:13.425502 | orchestrator | Thursday 19 February 2026 06:09:03 +0000 (0:00:01.110) 
0:25:49.574 ***** 2026-02-19 06:09:13.425514 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:13.425526 | orchestrator | 2026-02-19 06:09:13.425538 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 06:09:13.425551 | orchestrator | Thursday 19 February 2026 06:09:04 +0000 (0:00:01.494) 0:25:51.069 ***** 2026-02-19 06:09:13.425564 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:13.425576 | orchestrator | 2026-02-19 06:09:13.425588 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-19 06:09:13.425596 | orchestrator | Thursday 19 February 2026 06:09:05 +0000 (0:00:01.137) 0:25:52.206 ***** 2026-02-19 06:09:13.425603 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:13.425610 | orchestrator | 2026-02-19 06:09:13.425618 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-19 06:09:13.425632 | orchestrator | Thursday 19 February 2026 06:09:07 +0000 (0:00:01.121) 0:25:53.328 ***** 2026-02-19 06:09:13.425639 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:13.425646 | orchestrator | 2026-02-19 06:09:13.425655 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-19 06:09:13.425667 | orchestrator | Thursday 19 February 2026 06:09:08 +0000 (0:00:01.174) 0:25:54.503 ***** 2026-02-19 06:09:13.425679 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:13.425691 | orchestrator | 2026-02-19 06:09:13.425702 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-19 06:09:13.425714 | orchestrator | Thursday 19 February 2026 06:09:09 +0000 (0:00:01.138) 0:25:55.641 ***** 2026-02-19 06:09:13.425727 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:13.425739 | orchestrator | 2026-02-19 06:09:13.425751 | orchestrator | TASK [ceph-facts : Set_fact monitor_name 
ansible_facts['hostname']] ************ 2026-02-19 06:09:13.425764 | orchestrator | Thursday 19 February 2026 06:09:10 +0000 (0:00:01.107) 0:25:56.749 ***** 2026-02-19 06:09:13.425776 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:09:13.425787 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 06:09:13.425795 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:09:13.425802 | orchestrator | 2026-02-19 06:09:13.425809 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-19 06:09:13.425816 | orchestrator | Thursday 19 February 2026 06:09:12 +0000 (0:00:01.647) 0:25:58.396 ***** 2026-02-19 06:09:13.425824 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:13.425831 | orchestrator | 2026-02-19 06:09:13.425838 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-19 06:09:13.425852 | orchestrator | Thursday 19 February 2026 06:09:13 +0000 (0:00:01.235) 0:25:59.632 ***** 2026-02-19 06:09:37.494664 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:09:37.494775 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 06:09:37.494789 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:09:37.494799 | orchestrator | 2026-02-19 06:09:37.494809 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-19 06:09:37.494820 | orchestrator | Thursday 19 February 2026 06:09:16 +0000 (0:00:02.956) 0:26:02.589 ***** 2026-02-19 06:09:37.494829 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-19 06:09:37.494839 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-19 06:09:37.494847 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-2)  2026-02-19 06:09:37.494856 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:37.494865 | orchestrator | 2026-02-19 06:09:37.494874 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-19 06:09:37.494883 | orchestrator | Thursday 19 February 2026 06:09:17 +0000 (0:00:01.418) 0:26:04.007 ***** 2026-02-19 06:09:37.494894 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-19 06:09:37.494906 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-19 06:09:37.494915 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-19 06:09:37.494924 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:37.494933 | orchestrator | 2026-02-19 06:09:37.494962 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-19 06:09:37.494971 | orchestrator | Thursday 19 February 2026 06:09:19 +0000 (0:00:01.624) 0:26:05.631 ***** 2026-02-19 06:09:37.494982 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:37.495007 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:37.495017 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:37.495026 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:37.495034 | orchestrator | 2026-02-19 06:09:37.495043 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-19 06:09:37.495052 | orchestrator | Thursday 19 February 2026 06:09:20 +0000 (0:00:01.169) 0:26:06.801 ***** 2026-02-19 06:09:37.495062 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 06:09:13.930439', 'end': '2026-02-19 06:09:13.994448', 'delta': '0:00:00.064009', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 
'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-19 06:09:37.495089 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 06:09:14.515550', 'end': '2026-02-19 06:09:14.576572', 'delta': '0:00:00.061022', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-19 06:09:37.495100 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '8bdbabe346bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 06:09:15.127106', 'end': '2026-02-19 06:09:15.181704', 'delta': '0:00:00.054598', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8bdbabe346bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-19 06:09:37.495116 | orchestrator | 2026-02-19 06:09:37.495125 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-19 06:09:37.495134 | orchestrator | Thursday 19 February 2026 06:09:21 +0000 (0:00:01.160) 0:26:07.962 ***** 2026-02-19 
06:09:37.495143 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:37.495152 | orchestrator | 2026-02-19 06:09:37.495160 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-19 06:09:37.495169 | orchestrator | Thursday 19 February 2026 06:09:22 +0000 (0:00:01.223) 0:26:09.185 ***** 2026-02-19 06:09:37.495178 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:37.495188 | orchestrator | 2026-02-19 06:09:37.495198 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-19 06:09:37.495208 | orchestrator | Thursday 19 February 2026 06:09:24 +0000 (0:00:01.224) 0:26:10.410 ***** 2026-02-19 06:09:37.495218 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:37.495228 | orchestrator | 2026-02-19 06:09:37.495237 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-19 06:09:37.495247 | orchestrator | Thursday 19 February 2026 06:09:25 +0000 (0:00:01.136) 0:26:11.547 ***** 2026-02-19 06:09:37.495257 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:09:37.495267 | orchestrator | 2026-02-19 06:09:37.495277 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 06:09:37.495294 | orchestrator | Thursday 19 February 2026 06:09:27 +0000 (0:00:02.019) 0:26:13.566 ***** 2026-02-19 06:09:37.495309 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:37.495324 | orchestrator | 2026-02-19 06:09:37.495337 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-19 06:09:37.495361 | orchestrator | Thursday 19 February 2026 06:09:28 +0000 (0:00:01.117) 0:26:14.684 ***** 2026-02-19 06:09:37.495376 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:37.495391 | orchestrator | 2026-02-19 06:09:37.495405 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-02-19 06:09:37.495420 | orchestrator | Thursday 19 February 2026 06:09:29 +0000 (0:00:01.090) 0:26:15.775 ***** 2026-02-19 06:09:37.495434 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:37.495470 | orchestrator | 2026-02-19 06:09:37.495485 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 06:09:37.495501 | orchestrator | Thursday 19 February 2026 06:09:30 +0000 (0:00:01.212) 0:26:16.987 ***** 2026-02-19 06:09:37.495517 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:37.495532 | orchestrator | 2026-02-19 06:09:37.495547 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-19 06:09:37.495562 | orchestrator | Thursday 19 February 2026 06:09:31 +0000 (0:00:01.098) 0:26:18.086 ***** 2026-02-19 06:09:37.495577 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:37.495592 | orchestrator | 2026-02-19 06:09:37.495607 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-19 06:09:37.495621 | orchestrator | Thursday 19 February 2026 06:09:32 +0000 (0:00:01.135) 0:26:19.221 ***** 2026-02-19 06:09:37.495636 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:37.495651 | orchestrator | 2026-02-19 06:09:37.495666 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-19 06:09:37.495681 | orchestrator | Thursday 19 February 2026 06:09:34 +0000 (0:00:01.123) 0:26:20.345 ***** 2026-02-19 06:09:37.495695 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:37.495710 | orchestrator | 2026-02-19 06:09:37.495725 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 06:09:37.495738 | orchestrator | Thursday 19 February 2026 06:09:35 +0000 (0:00:01.105) 0:26:21.451 ***** 2026-02-19 06:09:37.495754 | orchestrator | skipping: 
[testbed-node-1] 2026-02-19 06:09:37.495768 | orchestrator | 2026-02-19 06:09:37.495783 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 06:09:37.495809 | orchestrator | Thursday 19 February 2026 06:09:36 +0000 (0:00:01.150) 0:26:22.601 ***** 2026-02-19 06:09:37.495824 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:37.495839 | orchestrator | 2026-02-19 06:09:37.495854 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 06:09:37.495880 | orchestrator | Thursday 19 February 2026 06:09:37 +0000 (0:00:01.104) 0:26:23.706 ***** 2026-02-19 06:09:41.034921 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:41.035036 | orchestrator | 2026-02-19 06:09:41.035052 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 06:09:41.035064 | orchestrator | Thursday 19 February 2026 06:09:38 +0000 (0:00:01.114) 0:26:24.820 ***** 2026-02-19 06:09:41.035076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:09:41.035091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:09:41.035101 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:09:41.035113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 06:09:41.035143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:09:41.035154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:09:41.035164 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:09:41.035253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5b78108', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 06:09:41.035269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:09:41.035280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:09:41.035295 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:41.035305 | orchestrator | 2026-02-19 06:09:41.035316 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 06:09:41.035326 | orchestrator | Thursday 19 February 2026 06:09:39 +0000 (0:00:01.232) 0:26:26.052 ***** 2026-02-19 06:09:41.035337 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:41.035348 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:41.035373 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:51.250339 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:51.250525 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:51.250546 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:51.250575 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:51.250691 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5b78108', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5b78108-03a3-45a4-88e7-9b1ec0e9e95a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:51.250735 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:51.250752 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:09:51.250764 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:51.250778 | orchestrator | 2026-02-19 06:09:51.250790 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 06:09:51.250801 | orchestrator | Thursday 19 February 2026 06:09:41 +0000 (0:00:01.199) 0:26:27.252 ***** 2026-02-19 06:09:51.250811 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:51.250822 | orchestrator | 2026-02-19 06:09:51.250831 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 06:09:51.250841 | orchestrator | Thursday 19 February 2026 06:09:42 +0000 (0:00:01.506) 0:26:28.758 ***** 2026-02-19 06:09:51.250858 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:51.250868 | orchestrator | 2026-02-19 06:09:51.250878 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:09:51.250889 | orchestrator | Thursday 19 February 2026 06:09:43 +0000 (0:00:01.124) 0:26:29.882 ***** 2026-02-19 06:09:51.250900 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:09:51.250911 | orchestrator | 2026-02-19 06:09:51.250921 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:09:51.250932 | orchestrator | Thursday 19 February 2026 06:09:45 +0000 (0:00:01.494) 0:26:31.377 ***** 2026-02-19 06:09:51.250943 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:51.250954 | orchestrator | 2026-02-19 06:09:51.250964 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:09:51.250975 | orchestrator | Thursday 19 February 2026 06:09:46 
+0000 (0:00:01.145) 0:26:32.522 ***** 2026-02-19 06:09:51.250986 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:51.250997 | orchestrator | 2026-02-19 06:09:51.251008 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:09:51.251019 | orchestrator | Thursday 19 February 2026 06:09:47 +0000 (0:00:01.214) 0:26:33.737 ***** 2026-02-19 06:09:51.251030 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:51.251041 | orchestrator | 2026-02-19 06:09:51.251053 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 06:09:51.251064 | orchestrator | Thursday 19 February 2026 06:09:48 +0000 (0:00:01.150) 0:26:34.888 ***** 2026-02-19 06:09:51.251075 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-19 06:09:51.251087 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 06:09:51.251096 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-19 06:09:51.251106 | orchestrator | 2026-02-19 06:09:51.251116 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 06:09:51.251125 | orchestrator | Thursday 19 February 2026 06:09:50 +0000 (0:00:01.624) 0:26:36.512 ***** 2026-02-19 06:09:51.251135 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-19 06:09:51.251145 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-19 06:09:51.251154 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-19 06:09:51.251164 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:09:51.251173 | orchestrator | 2026-02-19 06:09:51.251191 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 06:10:26.231824 | orchestrator | Thursday 19 February 2026 06:09:51 +0000 (0:00:00.948) 0:26:37.460 ***** 2026-02-19 06:10:26.231964 | orchestrator | 
skipping: [testbed-node-1] 2026-02-19 06:10:26.231989 | orchestrator | 2026-02-19 06:10:26.232008 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-19 06:10:26.232027 | orchestrator | Thursday 19 February 2026 06:09:52 +0000 (0:00:00.909) 0:26:38.370 ***** 2026-02-19 06:10:26.232044 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:10:26.232062 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 06:10:26.232079 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:10:26.232095 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 06:10:26.232111 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 06:10:26.232128 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 06:10:26.232143 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 06:10:26.232161 | orchestrator | 2026-02-19 06:10:26.232178 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-19 06:10:26.232193 | orchestrator | Thursday 19 February 2026 06:09:53 +0000 (0:00:01.850) 0:26:40.220 ***** 2026-02-19 06:10:26.232245 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:10:26.232262 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-19 06:10:26.232278 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:10:26.232294 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 06:10:26.232309 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-19 06:10:26.232325 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 06:10:26.232339 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 06:10:26.232354 | orchestrator | 2026-02-19 06:10:26.232371 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-19 06:10:26.232387 | orchestrator | Thursday 19 February 2026 06:09:55 +0000 (0:00:01.906) 0:26:42.127 ***** 2026-02-19 06:10:26.232405 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-02-19 06:10:26.232422 | orchestrator | 2026-02-19 06:10:26.232438 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-19 06:10:26.232555 | orchestrator | Thursday 19 February 2026 06:09:56 +0000 (0:00:01.060) 0:26:43.188 ***** 2026-02-19 06:10:26.232579 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-02-19 06:10:26.232594 | orchestrator | 2026-02-19 06:10:26.232610 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-19 06:10:26.232629 | orchestrator | Thursday 19 February 2026 06:09:58 +0000 (0:00:01.110) 0:26:44.299 ***** 2026-02-19 06:10:26.232646 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:10:26.232662 | orchestrator | 2026-02-19 06:10:26.232678 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-19 06:10:26.232695 | orchestrator | Thursday 19 February 2026 06:09:59 +0000 (0:00:01.845) 0:26:46.144 ***** 2026-02-19 06:10:26.232711 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.232727 | orchestrator | 2026-02-19 06:10:26.232742 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-19 
06:10:26.232758 | orchestrator | Thursday 19 February 2026 06:10:01 +0000 (0:00:01.107) 0:26:47.251 ***** 2026-02-19 06:10:26.232773 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.232788 | orchestrator | 2026-02-19 06:10:26.232802 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-19 06:10:26.232818 | orchestrator | Thursday 19 February 2026 06:10:02 +0000 (0:00:01.095) 0:26:48.347 ***** 2026-02-19 06:10:26.232833 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.232849 | orchestrator | 2026-02-19 06:10:26.232865 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-19 06:10:26.232879 | orchestrator | Thursday 19 February 2026 06:10:03 +0000 (0:00:01.131) 0:26:49.479 ***** 2026-02-19 06:10:26.232895 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:10:26.232909 | orchestrator | 2026-02-19 06:10:26.232925 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-19 06:10:26.232940 | orchestrator | Thursday 19 February 2026 06:10:04 +0000 (0:00:01.496) 0:26:50.976 ***** 2026-02-19 06:10:26.232956 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.232971 | orchestrator | 2026-02-19 06:10:26.232986 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-19 06:10:26.233002 | orchestrator | Thursday 19 February 2026 06:10:05 +0000 (0:00:01.147) 0:26:52.123 ***** 2026-02-19 06:10:26.233017 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.233033 | orchestrator | 2026-02-19 06:10:26.233049 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-19 06:10:26.233066 | orchestrator | Thursday 19 February 2026 06:10:07 +0000 (0:00:01.114) 0:26:53.238 ***** 2026-02-19 06:10:26.233081 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:10:26.233115 | orchestrator | 
2026-02-19 06:10:26.233162 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-19 06:10:26.233180 | orchestrator | Thursday 19 February 2026 06:10:08 +0000 (0:00:01.541) 0:26:54.780 ***** 2026-02-19 06:10:26.233197 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:10:26.233214 | orchestrator | 2026-02-19 06:10:26.233232 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-19 06:10:26.233279 | orchestrator | Thursday 19 February 2026 06:10:10 +0000 (0:00:01.532) 0:26:56.312 ***** 2026-02-19 06:10:26.233296 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.233312 | orchestrator | 2026-02-19 06:10:26.233328 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-19 06:10:26.233343 | orchestrator | Thursday 19 February 2026 06:10:10 +0000 (0:00:00.763) 0:26:57.075 ***** 2026-02-19 06:10:26.233360 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:10:26.233377 | orchestrator | 2026-02-19 06:10:26.233395 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-19 06:10:26.233413 | orchestrator | Thursday 19 February 2026 06:10:11 +0000 (0:00:00.783) 0:26:57.858 ***** 2026-02-19 06:10:26.233430 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.233447 | orchestrator | 2026-02-19 06:10:26.233464 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-19 06:10:26.233510 | orchestrator | Thursday 19 February 2026 06:10:12 +0000 (0:00:00.789) 0:26:58.647 ***** 2026-02-19 06:10:26.233526 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.233542 | orchestrator | 2026-02-19 06:10:26.233557 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-19 06:10:26.233572 | orchestrator | Thursday 19 February 2026 06:10:13 +0000 (0:00:00.768) 
0:26:59.416 ***** 2026-02-19 06:10:26.233587 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.233601 | orchestrator | 2026-02-19 06:10:26.233620 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-19 06:10:26.233637 | orchestrator | Thursday 19 February 2026 06:10:13 +0000 (0:00:00.759) 0:27:00.176 ***** 2026-02-19 06:10:26.233653 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.233669 | orchestrator | 2026-02-19 06:10:26.233684 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-19 06:10:26.233701 | orchestrator | Thursday 19 February 2026 06:10:14 +0000 (0:00:00.749) 0:27:00.925 ***** 2026-02-19 06:10:26.233717 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.233733 | orchestrator | 2026-02-19 06:10:26.233749 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-19 06:10:26.233766 | orchestrator | Thursday 19 February 2026 06:10:15 +0000 (0:00:00.761) 0:27:01.686 ***** 2026-02-19 06:10:26.233781 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:10:26.233798 | orchestrator | 2026-02-19 06:10:26.233814 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-19 06:10:26.233831 | orchestrator | Thursday 19 February 2026 06:10:16 +0000 (0:00:00.778) 0:27:02.465 ***** 2026-02-19 06:10:26.233847 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:10:26.233863 | orchestrator | 2026-02-19 06:10:26.233880 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-19 06:10:26.233894 | orchestrator | Thursday 19 February 2026 06:10:17 +0000 (0:00:00.790) 0:27:03.256 ***** 2026-02-19 06:10:26.233908 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:10:26.233921 | orchestrator | 2026-02-19 06:10:26.233929 | orchestrator | TASK [ceph-common : Include configure_repository.yml] 
************************** 2026-02-19 06:10:26.233937 | orchestrator | Thursday 19 February 2026 06:10:17 +0000 (0:00:00.773) 0:27:04.030 ***** 2026-02-19 06:10:26.233955 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.233963 | orchestrator | 2026-02-19 06:10:26.233971 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-19 06:10:26.233979 | orchestrator | Thursday 19 February 2026 06:10:18 +0000 (0:00:00.761) 0:27:04.791 ***** 2026-02-19 06:10:26.233987 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.234007 | orchestrator | 2026-02-19 06:10:26.234068 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-19 06:10:26.234078 | orchestrator | Thursday 19 February 2026 06:10:19 +0000 (0:00:00.759) 0:27:05.551 ***** 2026-02-19 06:10:26.234086 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.234094 | orchestrator | 2026-02-19 06:10:26.234102 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-19 06:10:26.234110 | orchestrator | Thursday 19 February 2026 06:10:20 +0000 (0:00:00.745) 0:27:06.297 ***** 2026-02-19 06:10:26.234118 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.234125 | orchestrator | 2026-02-19 06:10:26.234135 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-19 06:10:26.234149 | orchestrator | Thursday 19 February 2026 06:10:20 +0000 (0:00:00.781) 0:27:07.078 ***** 2026-02-19 06:10:26.234162 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.234175 | orchestrator | 2026-02-19 06:10:26.234188 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-19 06:10:26.234200 | orchestrator | Thursday 19 February 2026 06:10:21 +0000 (0:00:00.776) 0:27:07.855 ***** 2026-02-19 06:10:26.234214 | orchestrator | skipping: [testbed-node-1] 
2026-02-19 06:10:26.234227 | orchestrator | 2026-02-19 06:10:26.234240 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-19 06:10:26.234252 | orchestrator | Thursday 19 February 2026 06:10:22 +0000 (0:00:00.773) 0:27:08.629 ***** 2026-02-19 06:10:26.234266 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.234280 | orchestrator | 2026-02-19 06:10:26.234293 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-19 06:10:26.234308 | orchestrator | Thursday 19 February 2026 06:10:23 +0000 (0:00:00.769) 0:27:09.399 ***** 2026-02-19 06:10:26.234317 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.234324 | orchestrator | 2026-02-19 06:10:26.234332 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-19 06:10:26.234340 | orchestrator | Thursday 19 February 2026 06:10:23 +0000 (0:00:00.749) 0:27:10.148 ***** 2026-02-19 06:10:26.234348 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.234355 | orchestrator | 2026-02-19 06:10:26.234363 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-19 06:10:26.234371 | orchestrator | Thursday 19 February 2026 06:10:24 +0000 (0:00:00.747) 0:27:10.895 ***** 2026-02-19 06:10:26.234379 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.234386 | orchestrator | 2026-02-19 06:10:26.234394 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-19 06:10:26.234402 | orchestrator | Thursday 19 February 2026 06:10:25 +0000 (0:00:00.769) 0:27:11.665 ***** 2026-02-19 06:10:26.234410 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:10:26.234416 | orchestrator | 2026-02-19 06:10:26.234435 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-19 06:11:11.299076 | 
orchestrator | Thursday 19 February 2026 06:10:26 +0000 (0:00:00.779) 0:27:12.444 ***** 2026-02-19 06:11:11.299212 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:11:11.299232 | orchestrator | 2026-02-19 06:11:11.299245 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-19 06:11:11.299257 | orchestrator | Thursday 19 February 2026 06:10:26 +0000 (0:00:00.766) 0:27:13.211 ***** 2026-02-19 06:11:11.299268 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:11:11.299280 | orchestrator | 2026-02-19 06:11:11.299292 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-19 06:11:11.299303 | orchestrator | Thursday 19 February 2026 06:10:28 +0000 (0:00:01.656) 0:27:14.868 ***** 2026-02-19 06:11:11.299314 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:11:11.299325 | orchestrator | 2026-02-19 06:11:11.299336 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-19 06:11:11.299347 | orchestrator | Thursday 19 February 2026 06:10:30 +0000 (0:00:02.238) 0:27:17.107 ***** 2026-02-19 06:11:11.299358 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-02-19 06:11:11.299396 | orchestrator | 2026-02-19 06:11:11.299408 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-19 06:11:11.299418 | orchestrator | Thursday 19 February 2026 06:10:31 +0000 (0:00:01.108) 0:27:18.215 ***** 2026-02-19 06:11:11.299429 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:11:11.299440 | orchestrator | 2026-02-19 06:11:11.299451 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-19 06:11:11.299461 | orchestrator | Thursday 19 February 2026 06:10:33 +0000 (0:00:01.112) 0:27:19.328 ***** 2026-02-19 06:11:11.299472 | orchestrator | skipping: [testbed-node-1] 
2026-02-19 06:11:11.299483 | orchestrator | 2026-02-19 06:11:11.299493 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-19 06:11:11.299564 | orchestrator | Thursday 19 February 2026 06:10:34 +0000 (0:00:01.106) 0:27:20.434 ***** 2026-02-19 06:11:11.299575 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-19 06:11:11.299586 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-19 06:11:11.299598 | orchestrator | 2026-02-19 06:11:11.299610 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-19 06:11:11.299622 | orchestrator | Thursday 19 February 2026 06:10:36 +0000 (0:00:01.899) 0:27:22.334 ***** 2026-02-19 06:11:11.299635 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:11:11.299647 | orchestrator | 2026-02-19 06:11:11.299660 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-19 06:11:11.299676 | orchestrator | Thursday 19 February 2026 06:10:37 +0000 (0:00:01.482) 0:27:23.816 ***** 2026-02-19 06:11:11.299695 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:11:11.299716 | orchestrator | 2026-02-19 06:11:11.299755 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-19 06:11:11.299775 | orchestrator | Thursday 19 February 2026 06:10:38 +0000 (0:00:01.147) 0:27:24.963 ***** 2026-02-19 06:11:11.299794 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:11:11.299814 | orchestrator | 2026-02-19 06:11:11.299835 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-19 06:11:11.299856 | orchestrator | Thursday 19 February 2026 06:10:39 +0000 (0:00:00.797) 0:27:25.761 ***** 2026-02-19 06:11:11.299877 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:11:11.299898 | orchestrator | 
2026-02-19 06:11:11.299919 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-19 06:11:11.299938 | orchestrator | Thursday 19 February 2026 06:10:40 +0000 (0:00:00.779) 0:27:26.540 ***** 2026-02-19 06:11:11.299959 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-02-19 06:11:11.299978 | orchestrator | 2026-02-19 06:11:11.299996 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-19 06:11:11.300016 | orchestrator | Thursday 19 February 2026 06:10:41 +0000 (0:00:01.137) 0:27:27.678 ***** 2026-02-19 06:11:11.300032 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:11:11.300052 | orchestrator | 2026-02-19 06:11:11.300071 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-19 06:11:11.300092 | orchestrator | Thursday 19 February 2026 06:10:43 +0000 (0:00:01.733) 0:27:29.411 ***** 2026-02-19 06:11:11.300112 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 06:11:11.300131 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 06:11:11.300146 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 06:11:11.300157 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:11:11.300167 | orchestrator | 2026-02-19 06:11:11.300178 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-19 06:11:11.300189 | orchestrator | Thursday 19 February 2026 06:10:44 +0000 (0:00:01.118) 0:27:30.530 ***** 2026-02-19 06:11:11.300200 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:11:11.300223 | orchestrator | 2026-02-19 06:11:11.300234 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-19 06:11:11.300245 | 
orchestrator | Thursday 19 February 2026 06:10:45 +0000 (0:00:01.124) 0:27:31.654 ***** 2026-02-19 06:11:11.300256 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:11:11.300266 | orchestrator | 2026-02-19 06:11:11.300277 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-19 06:11:11.300288 | orchestrator | Thursday 19 February 2026 06:10:46 +0000 (0:00:01.218) 0:27:32.873 ***** 2026-02-19 06:11:11.300299 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:11:11.300309 | orchestrator | 2026-02-19 06:11:11.300320 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-19 06:11:11.300331 | orchestrator | Thursday 19 February 2026 06:10:47 +0000 (0:00:01.152) 0:27:34.026 ***** 2026-02-19 06:11:11.300342 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:11:11.300352 | orchestrator | 2026-02-19 06:11:11.300386 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-19 06:11:11.300397 | orchestrator | Thursday 19 February 2026 06:10:48 +0000 (0:00:01.144) 0:27:35.171 ***** 2026-02-19 06:11:11.300408 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:11:11.300420 | orchestrator | 2026-02-19 06:11:11.300430 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-19 06:11:11.300441 | orchestrator | Thursday 19 February 2026 06:10:49 +0000 (0:00:00.771) 0:27:35.943 ***** 2026-02-19 06:11:11.300452 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:11:11.300462 | orchestrator | 2026-02-19 06:11:11.300473 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-19 06:11:11.300484 | orchestrator | Thursday 19 February 2026 06:10:51 +0000 (0:00:02.201) 0:27:38.144 ***** 2026-02-19 06:11:11.300523 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:11:11.300542 | orchestrator | 2026-02-19 
06:11:11.300553 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-19 06:11:11.300564 | orchestrator | Thursday 19 February 2026 06:10:52 +0000 (0:00:00.767) 0:27:38.912 *****
2026-02-19 06:11:11.300575 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-19 06:11:11.300586 | orchestrator |
2026-02-19 06:11:11.300597 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-19 06:11:11.300608 | orchestrator | Thursday 19 February 2026 06:10:53 +0000 (0:00:01.133) 0:27:40.046 *****
2026-02-19 06:11:11.300618 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:11.300629 | orchestrator |
2026-02-19 06:11:11.300640 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-19 06:11:11.300650 | orchestrator | Thursday 19 February 2026 06:10:54 +0000 (0:00:01.142) 0:27:41.188 *****
2026-02-19 06:11:11.300661 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:11.300671 | orchestrator |
2026-02-19 06:11:11.300682 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-19 06:11:11.300693 | orchestrator | Thursday 19 February 2026 06:10:56 +0000 (0:00:01.185) 0:27:42.374 *****
2026-02-19 06:11:11.300703 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:11.300714 | orchestrator |
2026-02-19 06:11:11.300725 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-19 06:11:11.300735 | orchestrator | Thursday 19 February 2026 06:10:57 +0000 (0:00:01.111) 0:27:43.485 *****
2026-02-19 06:11:11.300746 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:11.300757 | orchestrator |
2026-02-19 06:11:11.300767 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-19 06:11:11.300778 | orchestrator | Thursday 19 February 2026 06:10:58 +0000 (0:00:01.132) 0:27:44.617 *****
2026-02-19 06:11:11.300789 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:11.300799 | orchestrator |
2026-02-19 06:11:11.300812 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-19 06:11:11.300831 | orchestrator | Thursday 19 February 2026 06:10:59 +0000 (0:00:01.114) 0:27:45.732 *****
2026-02-19 06:11:11.300871 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:11.300890 | orchestrator |
2026-02-19 06:11:11.300908 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-19 06:11:11.300928 | orchestrator | Thursday 19 February 2026 06:11:00 +0000 (0:00:01.109) 0:27:46.841 *****
2026-02-19 06:11:11.300947 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:11.300966 | orchestrator |
2026-02-19 06:11:11.300984 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-19 06:11:11.301004 | orchestrator | Thursday 19 February 2026 06:11:01 +0000 (0:00:01.121) 0:27:47.963 *****
2026-02-19 06:11:11.301022 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:11.301039 | orchestrator |
2026-02-19 06:11:11.301050 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-19 06:11:11.301061 | orchestrator | Thursday 19 February 2026 06:11:02 +0000 (0:00:01.134) 0:27:49.098 *****
2026-02-19 06:11:11.301071 | orchestrator | ok: [testbed-node-1]
2026-02-19 06:11:11.301082 | orchestrator |
2026-02-19 06:11:11.301093 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-19 06:11:11.301104 | orchestrator | Thursday 19 February 2026 06:11:03 +0000 (0:00:00.818) 0:27:49.917 *****
2026-02-19 06:11:11.301114 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-19 06:11:11.301125 | orchestrator |
2026-02-19 06:11:11.301136 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-19 06:11:11.301147 | orchestrator | Thursday 19 February 2026 06:11:04 +0000 (0:00:01.118) 0:27:51.036 *****
2026-02-19 06:11:11.301158 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-19 06:11:11.301169 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-19 06:11:11.301180 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-19 06:11:11.301190 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-19 06:11:11.301201 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-19 06:11:11.301212 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-19 06:11:11.301223 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-19 06:11:11.301233 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-19 06:11:11.301244 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-19 06:11:11.301255 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-19 06:11:11.301265 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-19 06:11:11.301276 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-19 06:11:11.301287 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-19 06:11:11.301298 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 06:11:11.301308 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-19 06:11:11.301319 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-19 06:11:11.301330 | orchestrator |
2026-02-19 06:11:11.301350 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-19 06:11:51.529437 | orchestrator | Thursday 19 February 2026 06:11:11 +0000 (0:00:06.464) 0:27:57.501 *****
2026-02-19 06:11:51.529609 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.529635 | orchestrator |
2026-02-19 06:11:51.529654 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-19 06:11:51.529669 | orchestrator | Thursday 19 February 2026 06:11:12 +0000 (0:00:00.788) 0:27:58.289 *****
2026-02-19 06:11:51.529684 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.529699 | orchestrator |
2026-02-19 06:11:51.529715 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 06:11:51.529731 | orchestrator | Thursday 19 February 2026 06:11:12 +0000 (0:00:00.769) 0:27:59.059 *****
2026-02-19 06:11:51.529746 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.529792 | orchestrator |
2026-02-19 06:11:51.529808 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 06:11:51.529824 | orchestrator | Thursday 19 February 2026 06:11:13 +0000 (0:00:00.784) 0:27:59.843 *****
2026-02-19 06:11:51.529840 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.529855 | orchestrator |
2026-02-19 06:11:51.529870 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 06:11:51.529886 | orchestrator | Thursday 19 February 2026 06:11:14 +0000 (0:00:00.793) 0:28:00.637 *****
2026-02-19 06:11:51.529902 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.529917 | orchestrator |
2026-02-19 06:11:51.529932 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 06:11:51.529947 | orchestrator | Thursday 19 February 2026 06:11:15 +0000 (0:00:00.752) 0:28:01.390 *****
2026-02-19 06:11:51.529963 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.529978 | orchestrator |
2026-02-19 06:11:51.529992 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 06:11:51.530007 | orchestrator | Thursday 19 February 2026 06:11:15 +0000 (0:00:00.765) 0:28:02.155 *****
2026-02-19 06:11:51.530086 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530102 | orchestrator |
2026-02-19 06:11:51.530119 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 06:11:51.530134 | orchestrator | Thursday 19 February 2026 06:11:16 +0000 (0:00:00.840) 0:28:02.996 *****
2026-02-19 06:11:51.530149 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530164 | orchestrator |
2026-02-19 06:11:51.530179 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 06:11:51.530194 | orchestrator | Thursday 19 February 2026 06:11:17 +0000 (0:00:00.770) 0:28:03.767 *****
2026-02-19 06:11:51.530209 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530224 | orchestrator |
2026-02-19 06:11:51.530239 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 06:11:51.530270 | orchestrator | Thursday 19 February 2026 06:11:18 +0000 (0:00:00.781) 0:28:04.548 *****
2026-02-19 06:11:51.530286 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530300 | orchestrator |
2026-02-19 06:11:51.530314 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 06:11:51.530327 | orchestrator | Thursday 19 February 2026 06:11:19 +0000 (0:00:00.798) 0:28:05.347 *****
2026-02-19 06:11:51.530341 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530354 | orchestrator |
2026-02-19 06:11:51.530367 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 06:11:51.530381 | orchestrator | Thursday 19 February 2026 06:11:19 +0000 (0:00:00.762) 0:28:06.109 *****
2026-02-19 06:11:51.530395 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530409 | orchestrator |
2026-02-19 06:11:51.530422 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 06:11:51.530438 | orchestrator | Thursday 19 February 2026 06:11:20 +0000 (0:00:00.783) 0:28:06.892 *****
2026-02-19 06:11:51.530452 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530467 | orchestrator |
2026-02-19 06:11:51.530482 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 06:11:51.530497 | orchestrator | Thursday 19 February 2026 06:11:21 +0000 (0:00:00.869) 0:28:07.762 *****
2026-02-19 06:11:51.530512 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530550 | orchestrator |
2026-02-19 06:11:51.530565 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-19 06:11:51.530580 | orchestrator | Thursday 19 February 2026 06:11:22 +0000 (0:00:00.767) 0:28:08.530 *****
2026-02-19 06:11:51.530594 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530609 | orchestrator |
2026-02-19 06:11:51.530624 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-19 06:11:51.530639 | orchestrator | Thursday 19 February 2026 06:11:23 +0000 (0:00:00.914) 0:28:09.444 *****
2026-02-19 06:11:51.530668 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530683 | orchestrator |
2026-02-19 06:11:51.530697 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-19 06:11:51.530712 | orchestrator | Thursday 19 February 2026 06:11:23 +0000 (0:00:00.758) 0:28:10.203 *****
2026-02-19 06:11:51.530726 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530741 | orchestrator |
2026-02-19 06:11:51.530757 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:11:51.530773 | orchestrator | Thursday 19 February 2026 06:11:24 +0000 (0:00:00.753) 0:28:10.956 *****
2026-02-19 06:11:51.530787 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530802 | orchestrator |
2026-02-19 06:11:51.530816 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:11:51.530831 | orchestrator | Thursday 19 February 2026 06:11:25 +0000 (0:00:00.757) 0:28:11.714 *****
2026-02-19 06:11:51.530845 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530859 | orchestrator |
2026-02-19 06:11:51.530873 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:11:51.530888 | orchestrator | Thursday 19 February 2026 06:11:26 +0000 (0:00:00.787) 0:28:12.502 *****
2026-02-19 06:11:51.530902 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530917 | orchestrator |
2026-02-19 06:11:51.530954 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:11:51.530970 | orchestrator | Thursday 19 February 2026 06:11:27 +0000 (0:00:00.780) 0:28:13.282 *****
2026-02-19 06:11:51.530984 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.530999 | orchestrator |
2026-02-19 06:11:51.531014 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:11:51.531028 | orchestrator | Thursday 19 February 2026 06:11:27 +0000 (0:00:00.772) 0:28:14.055 *****
2026-02-19 06:11:51.531043 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-19 06:11:51.531057 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-19 06:11:51.531072 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-19 06:11:51.531087 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.531102 | orchestrator |
2026-02-19 06:11:51.531117 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 06:11:51.531131 | orchestrator | Thursday 19 February 2026 06:11:29 +0000 (0:00:01.316) 0:28:15.372 *****
2026-02-19 06:11:51.531146 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-19 06:11:51.531161 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-19 06:11:51.531176 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-19 06:11:51.531190 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.531205 | orchestrator |
2026-02-19 06:11:51.531219 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 06:11:51.531233 | orchestrator | Thursday 19 February 2026 06:11:30 +0000 (0:00:01.334) 0:28:16.707 *****
2026-02-19 06:11:51.531248 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-19 06:11:51.531263 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-19 06:11:51.531278 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-19 06:11:51.531292 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.531307 | orchestrator |
2026-02-19 06:11:51.531322 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 06:11:51.531337 | orchestrator | Thursday 19 February 2026 06:11:31 +0000 (0:00:01.010) 0:28:17.717 *****
2026-02-19 06:11:51.531351 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.531366 | orchestrator |
2026-02-19 06:11:51.531381 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 06:11:51.531395 | orchestrator | Thursday 19 February 2026 06:11:32 +0000 (0:00:00.771) 0:28:18.489 *****
2026-02-19 06:11:51.531419 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-19 06:11:51.531434 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.531448 | orchestrator |
2026-02-19 06:11:51.531461 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-19 06:11:51.531485 | orchestrator | Thursday 19 February 2026 06:11:33 +0000 (0:00:00.880) 0:28:19.370 *****
2026-02-19 06:11:51.531500 | orchestrator | ok: [testbed-node-1]
2026-02-19 06:11:51.531514 | orchestrator |
2026-02-19 06:11:51.531564 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-19 06:11:51.531578 | orchestrator | Thursday 19 February 2026 06:11:34 +0000 (0:00:01.403) 0:28:20.773 *****
2026-02-19 06:11:51.531592 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:11:51.531606 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-19 06:11:51.531620 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:11:51.531633 | orchestrator |
2026-02-19 06:11:51.531647 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-19 06:11:51.531660 | orchestrator | Thursday 19 February 2026 06:11:35 +0000 (0:00:01.269) 0:28:22.042 *****
2026-02-19 06:11:51.531673 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-02-19 06:11:51.531687 | orchestrator |
2026-02-19 06:11:51.531701 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-19 06:11:51.531714 | orchestrator | Thursday 19 February 2026 06:11:36 +0000 (0:00:01.102) 0:28:23.145 *****
2026-02-19 06:11:51.531728 | orchestrator | ok: [testbed-node-1]
2026-02-19 06:11:51.531741 | orchestrator |
2026-02-19 06:11:51.531754 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-19 06:11:51.531769 | orchestrator | Thursday 19 February 2026 06:11:38 +0000 (0:00:01.459) 0:28:24.605 *****
2026-02-19 06:11:51.531782 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:11:51.531795 | orchestrator |
2026-02-19 06:11:51.531809 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-19 06:11:51.531823 | orchestrator | Thursday 19 February 2026 06:11:39 +0000 (0:00:01.117) 0:28:25.722 *****
2026-02-19 06:11:51.531836 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 06:11:51.531850 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 06:11:51.531863 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 06:11:51.531877 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-02-19 06:11:51.531891 | orchestrator |
2026-02-19 06:11:51.531905 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-19 06:11:51.531918 | orchestrator | Thursday 19 February 2026 06:11:47 +0000 (0:00:07.782) 0:28:33.505 *****
2026-02-19 06:11:51.531930 | orchestrator | ok: [testbed-node-1]
2026-02-19 06:11:51.531944 | orchestrator |
2026-02-19 06:11:51.531956 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-19 06:11:51.531970 | orchestrator | Thursday 19 February 2026 06:11:48 +0000 (0:00:01.159) 0:28:34.664 *****
2026-02-19 06:11:51.531982 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-19 06:11:51.531996 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-19 06:11:51.532009 | orchestrator |
2026-02-19 06:11:51.532034 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-19 06:12:36.771139 | orchestrator | Thursday 19 February 2026 06:11:51 +0000 (0:00:03.073) 0:28:37.738 *****
2026-02-19 06:12:36.771255 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-19 06:12:36.771262 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-19 06:12:36.771269 | orchestrator |
2026-02-19 06:12:36.771275 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-19 06:12:36.771281 | orchestrator | Thursday 19 February 2026 06:11:53 +0000 (0:00:01.803) 0:28:39.541 *****
2026-02-19 06:12:36.771305 | orchestrator | ok: [testbed-node-1]
2026-02-19 06:12:36.771310 | orchestrator |
2026-02-19 06:12:36.771315 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-19 06:12:36.771320 | orchestrator | Thursday 19 February 2026 06:11:54 +0000 (0:00:01.260) 0:28:40.802 *****
2026-02-19 06:12:36.771325 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:12:36.771330 | orchestrator |
2026-02-19 06:12:36.771334 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-19 06:12:36.771339 | orchestrator | Thursday 19 February 2026 06:11:55 +0000 (0:00:00.595) 0:28:41.398 *****
2026-02-19 06:12:36.771343 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:12:36.771348 | orchestrator |
2026-02-19 06:12:36.771352 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-19 06:12:36.771357 | orchestrator | Thursday 19 February 2026 06:11:55 +0000 (0:00:00.596) 0:28:41.994 *****
2026-02-19 06:12:36.771361 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1
2026-02-19 06:12:36.771367 | orchestrator |
2026-02-19 06:12:36.771371 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-19 06:12:36.771376 | orchestrator | Thursday 19 February 2026 06:11:56 +0000 (0:00:00.882) 0:28:42.876 *****
2026-02-19 06:12:36.771380 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:12:36.771385 | orchestrator |
2026-02-19 06:12:36.771390 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-19 06:12:36.771394 | orchestrator | Thursday 19 February 2026 06:11:57 +0000 (0:00:01.093) 0:28:43.970 *****
2026-02-19 06:12:36.771399 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:12:36.771403 | orchestrator |
2026-02-19 06:12:36.771408 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-19 06:12:36.771412 | orchestrator | Thursday 19 February 2026 06:11:58 +0000 (0:00:00.991) 0:28:44.962 *****
2026-02-19 06:12:36.771417 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1
2026-02-19 06:12:36.771421 | orchestrator |
2026-02-19 06:12:36.771426 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-19 06:12:36.771431 | orchestrator | Thursday 19 February 2026 06:11:59 +0000 (0:00:01.109) 0:28:46.072 *****
2026-02-19 06:12:36.771436 | orchestrator | ok: [testbed-node-1]
2026-02-19 06:12:36.771440 | orchestrator |
2026-02-19 06:12:36.771458 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-19 06:12:36.771463 | orchestrator | Thursday 19 February 2026 06:12:01 +0000 (0:00:01.979) 0:28:48.051 *****
2026-02-19 06:12:36.771468 | orchestrator | ok: [testbed-node-1]
2026-02-19 06:12:36.771472 | orchestrator |
2026-02-19 06:12:36.771477 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-19 06:12:36.771481 | orchestrator | Thursday 19 February 2026 06:12:03 +0000 (0:00:02.766) 0:28:49.970 *****
2026-02-19 06:12:36.771486 | orchestrator | ok: [testbed-node-1]
2026-02-19 06:12:36.771490 | orchestrator |
2026-02-19 06:12:36.771495 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-19 06:12:36.771499 | orchestrator | Thursday 19 February 2026 06:12:06 +0000 (0:00:02.766) 0:28:52.736 *****
2026-02-19 06:12:36.771504 | orchestrator | changed: [testbed-node-1]
2026-02-19 06:12:36.771509 | orchestrator |
2026-02-19 06:12:36.771513 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-19 06:12:36.771518 | orchestrator | Thursday 19 February 2026 06:12:10 +0000 (0:00:03.608) 0:28:56.345 *****
2026-02-19 06:12:36.771522 | orchestrator | skipping: [testbed-node-1]
2026-02-19 06:12:36.771527 | orchestrator |
2026-02-19 06:12:36.771531 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-02-19 06:12:36.771536 | orchestrator |
2026-02-19 06:12:36.771594 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-19 06:12:36.771600 | orchestrator | Thursday 19 February 2026 06:12:11 +0000 (0:00:00.991) 0:28:57.337 *****
2026-02-19 06:12:36.771605 | orchestrator | changed: [testbed-node-2]
2026-02-19 06:12:36.771616 | orchestrator |
2026-02-19 06:12:36.771624 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-02-19 06:12:36.771632 | orchestrator | Thursday 19 February 2026 06:12:13 +0000 (0:00:02.565) 0:28:59.902 *****
2026-02-19 06:12:36.771639 | orchestrator | changed: [testbed-node-2]
2026-02-19 06:12:36.771646 | orchestrator |
2026-02-19 06:12:36.771653 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-19 06:12:36.771660 | orchestrator | Thursday 19 February 2026 06:12:15 +0000 (0:00:02.160) 0:29:02.063 *****
2026-02-19 06:12:36.771669 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-02-19 06:12:36.771677 | orchestrator |
2026-02-19 06:12:36.771684 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-19 06:12:36.771692 | orchestrator | Thursday 19 February 2026 06:12:16 +0000 (0:00:01.127) 0:29:03.190 *****
2026-02-19 06:12:36.771700 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:12:36.771709 | orchestrator |
2026-02-19 06:12:36.771718 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-19 06:12:36.771725 | orchestrator | Thursday 19 February 2026 06:12:18 +0000 (0:00:01.480) 0:29:04.671 *****
2026-02-19 06:12:36.771730 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:12:36.771736 | orchestrator |
2026-02-19 06:12:36.771741 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-19 06:12:36.771746 | orchestrator | Thursday 19 February 2026 06:12:19 +0000 (0:00:01.114) 0:29:05.786 *****
2026-02-19 06:12:36.771751 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:12:36.771756 | orchestrator |
2026-02-19 06:12:36.771762 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-19 06:12:36.771781 | orchestrator | Thursday 19 February 2026 06:12:21 +0000 (0:00:01.449) 0:29:07.236 *****
2026-02-19 06:12:36.771787 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:12:36.771792 | orchestrator |
2026-02-19 06:12:36.771797 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-19 06:12:36.771803 | orchestrator | Thursday 19 February 2026 06:12:22 +0000 (0:00:01.139) 0:29:08.376 *****
2026-02-19 06:12:36.771808 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:12:36.771813 | orchestrator |
2026-02-19 06:12:36.771818 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-19 06:12:36.771823 | orchestrator | Thursday 19 February 2026 06:12:23 +0000 (0:00:01.127) 0:29:09.504 *****
2026-02-19 06:12:36.771828 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:12:36.771834 | orchestrator |
2026-02-19 06:12:36.771839 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-19 06:12:36.771844 | orchestrator | Thursday 19 February 2026 06:12:24 +0000 (0:00:01.111) 0:29:10.615 *****
2026-02-19 06:12:36.771849 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:36.771855 | orchestrator |
2026-02-19 06:12:36.771860 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-19 06:12:36.771865 | orchestrator | Thursday 19 February 2026 06:12:25 +0000 (0:00:01.124) 0:29:11.740 *****
2026-02-19 06:12:36.771870 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:12:36.771875 | orchestrator |
2026-02-19 06:12:36.771880 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-19 06:12:36.771885 | orchestrator | Thursday 19 February 2026 06:12:26 +0000 (0:00:01.093) 0:29:12.833 *****
2026-02-19 06:12:36.771890 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:12:36.771896 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:12:36.771901 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-19 06:12:36.771906 | orchestrator |
2026-02-19 06:12:36.771912 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-19 06:12:36.771917 | orchestrator | Thursday 19 February 2026 06:12:28 +0000 (0:00:01.623) 0:29:14.456 *****
2026-02-19 06:12:36.771922 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:12:36.771932 | orchestrator |
2026-02-19 06:12:36.771937 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-19 06:12:36.771942 | orchestrator | Thursday 19 February 2026 06:12:29 +0000 (0:00:01.231) 0:29:15.687 *****
2026-02-19 06:12:36.771946 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:12:36.771951 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:12:36.771955 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-19 06:12:36.771960 | orchestrator |
2026-02-19 06:12:36.771969 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-19 06:12:36.771974 | orchestrator | Thursday 19 February 2026 06:12:32 +0000 (0:00:02.911) 0:29:18.599 *****
2026-02-19 06:12:36.771979 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-19 06:12:36.771983 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-19 06:12:36.771988 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-19 06:12:36.771992 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:36.771997 | orchestrator |
2026-02-19 06:12:36.772001 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-19 06:12:36.772006 | orchestrator | Thursday 19 February 2026 06:12:33 +0000 (0:00:01.362) 0:29:19.961 *****
2026-02-19 06:12:36.772013 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-19 06:12:36.772021 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 06:12:36.772026 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 06:12:36.772030 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:36.772035 | orchestrator |
2026-02-19 06:12:36.772040 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-19 06:12:36.772044 | orchestrator | Thursday 19 February 2026 06:12:35 +0000 (0:00:01.864) 0:29:21.826 *****
2026-02-19 06:12:36.772051 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 06:12:36.772063 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 06:12:55.061129 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 06:12:55.061305 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:55.061330 | orchestrator |
2026-02-19 06:12:55.061350 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-19 06:12:55.061419 | orchestrator | Thursday 19 February 2026 06:12:36 +0000 (0:00:01.156) 0:29:22.983 *****
2026-02-19 06:12:55.061439 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 06:12:30.023675', 'end': '2026-02-19 06:12:30.079251', 'delta': '0:00:00.055576', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-19 06:12:55.061478 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 06:12:30.621729', 'end': '2026-02-19 06:12:30.663906', 'delta': '0:00:00.042177', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 06:12:55.061496 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '8bdbabe346bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 06:12:31.177609', 'end': '2026-02-19 06:12:31.218662', 'delta': '0:00:00.041053', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8bdbabe346bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 06:12:55.061512 | orchestrator |
2026-02-19 06:12:55.061530 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-19 06:12:55.061576 | orchestrator | Thursday 19 February 2026 06:12:37 +0000 (0:00:01.168) 0:29:24.152 *****
2026-02-19 06:12:55.061595 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:12:55.061614 | orchestrator |
2026-02-19 06:12:55.061629 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-19 06:12:55.061645 | orchestrator | Thursday 19 February 2026 06:12:39 +0000 (0:00:01.257) 0:29:25.409 *****
2026-02-19 06:12:55.061661 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:55.061680 | orchestrator |
2026-02-19 06:12:55.061696 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-19 06:12:55.061714 | orchestrator | Thursday 19 February 2026 06:12:40 +0000 (0:00:01.521) 0:29:26.930 *****
2026-02-19 06:12:55.061732 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:12:55.061751 | orchestrator |
2026-02-19 06:12:55.061768 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-19 06:12:55.061788 | orchestrator | Thursday 19 February 2026 06:12:41 +0000 (0:00:01.143) 0:29:28.074 *****
2026-02-19 06:12:55.061805 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-19 06:12:55.061822 | orchestrator |
2026-02-19 06:12:55.061838 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 06:12:55.061855 | orchestrator | Thursday 19 February 2026 06:12:43 +0000 (0:00:02.008) 0:29:30.083 *****
2026-02-19 06:12:55.061872 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:12:55.061888 | orchestrator |
2026-02-19 06:12:55.061919 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-19 06:12:55.061935 | orchestrator | Thursday 19 February 2026 06:12:44 +0000 (0:00:01.093) 0:29:31.177 *****
2026-02-19 06:12:55.061977 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:55.061995 | orchestrator |
2026-02-19 06:12:55.062011 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-19 06:12:55.062096 | orchestrator | Thursday 19 February 2026 06:12:45 +0000 (0:00:00.901) 0:29:32.079 *****
2026-02-19 06:12:55.062114 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:55.062132 | orchestrator |
2026-02-19 06:12:55.062148 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 06:12:55.062166 | orchestrator | Thursday 19 February 2026 06:12:46 +0000 (0:00:01.015) 0:29:33.095 *****
2026-02-19 06:12:55.062185 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:55.062201 | orchestrator |
2026-02-19 06:12:55.062218 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-19 06:12:55.062234 | orchestrator | Thursday 19 February 2026 06:12:47 +0000 (0:00:00.896) 0:29:33.991 *****
2026-02-19 06:12:55.062250 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:55.062266 | orchestrator |
2026-02-19 06:12:55.062286 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-19 06:12:55.062303 | orchestrator | Thursday 19 February 2026 06:12:48 +0000 (0:00:00.890) 0:29:34.882 *****
2026-02-19 06:12:55.062320 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:55.062337 | orchestrator |
2026-02-19 06:12:55.062355 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-19 06:12:55.062371 | orchestrator | Thursday 19 February 2026 06:12:49 +0000 (0:00:00.902) 0:29:35.784 *****
2026-02-19 06:12:55.062387 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:55.062404 | orchestrator |
2026-02-19 06:12:55.062422 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-19 06:12:55.062438 | orchestrator | Thursday 19 February 2026 06:12:50 +0000 (0:00:00.918) 0:29:36.703 *****
2026-02-19 06:12:55.062455 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:55.062472 | orchestrator |
2026-02-19 06:12:55.062488 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-19 06:12:55.062506 | orchestrator | Thursday 19 February 2026 06:12:51 +0000 (0:00:01.127) 0:29:37.831 *****
2026-02-19 06:12:55.062523 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:55.062540 | orchestrator |
2026-02-19 06:12:55.062583 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-19 06:12:55.062600 | orchestrator | Thursday 19 February 2026 06:12:52 +0000 (0:00:01.077) 0:29:38.908 *****
2026-02-19 06:12:55.062616 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:12:55.062631 | orchestrator |
2026-02-19 06:12:55.062647 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-19 06:12:55.062662 | orchestrator | Thursday 19 February 2026 06:12:53 +0000 (0:00:01.079) 0:29:39.987
***** 2026-02-19 06:12:55.062690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:12:55.062710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:12:55.062726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:12:55.062754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 
06:12:55.062774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:12:55.062808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:12:56.336245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:12:56.336358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a13c58d9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16', 
'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 06:12:56.336393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:12:56.336402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:12:56.336411 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:12:56.336420 | orchestrator | 2026-02-19 06:12:56.336428 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 06:12:56.336437 | orchestrator | Thursday 19 February 2026 06:12:55 +0000 (0:00:01.276) 0:29:41.264 ***** 2026-02-19 06:12:56.336516 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:12:56.336528 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:12:56.336536 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:12:56.336607 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:12:56.336626 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:12:56.336634 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:12:56.336642 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:12:56.336663 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a13c58d9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1', 'scsi-SQEMU_QEMU_HARDDISK_a13c58d9-588a-42a7-bf8f-f8f5c3b5c1a4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:13:30.208025 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:13:30.208107 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:13:30.208115 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:13:30.208121 | orchestrator | 2026-02-19 06:13:30.208127 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 06:13:30.208132 | 
orchestrator | Thursday 19 February 2026 06:12:56 +0000 (0:00:01.287) 0:29:42.551 ***** 2026-02-19 06:13:30.208137 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:13:30.208142 | orchestrator | 2026-02-19 06:13:30.208146 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 06:13:30.208151 | orchestrator | Thursday 19 February 2026 06:12:57 +0000 (0:00:01.531) 0:29:44.083 ***** 2026-02-19 06:13:30.208155 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:13:30.208160 | orchestrator | 2026-02-19 06:13:30.208164 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:13:30.208169 | orchestrator | Thursday 19 February 2026 06:12:58 +0000 (0:00:01.124) 0:29:45.208 ***** 2026-02-19 06:13:30.208173 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:13:30.208178 | orchestrator | 2026-02-19 06:13:30.208182 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:13:30.208186 | orchestrator | Thursday 19 February 2026 06:13:00 +0000 (0:00:01.473) 0:29:46.681 ***** 2026-02-19 06:13:30.208191 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:13:30.208195 | orchestrator | 2026-02-19 06:13:30.208200 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:13:30.208204 | orchestrator | Thursday 19 February 2026 06:13:01 +0000 (0:00:01.142) 0:29:47.824 ***** 2026-02-19 06:13:30.208208 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:13:30.208213 | orchestrator | 2026-02-19 06:13:30.208217 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:13:30.208222 | orchestrator | Thursday 19 February 2026 06:13:02 +0000 (0:00:01.198) 0:29:49.023 ***** 2026-02-19 06:13:30.208226 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:13:30.208230 | orchestrator | 2026-02-19 06:13:30.208235 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 06:13:30.208239 | orchestrator | Thursday 19 February 2026 06:13:03 +0000 (0:00:01.122) 0:29:50.146 ***** 2026-02-19 06:13:30.208243 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-19 06:13:30.208248 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-19 06:13:30.208274 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-19 06:13:30.208282 | orchestrator | 2026-02-19 06:13:30.208289 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 06:13:30.208296 | orchestrator | Thursday 19 February 2026 06:13:05 +0000 (0:00:01.672) 0:29:51.818 ***** 2026-02-19 06:13:30.208303 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-19 06:13:30.208310 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-19 06:13:30.208316 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-19 06:13:30.208322 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:13:30.208328 | orchestrator | 2026-02-19 06:13:30.208334 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 06:13:30.208355 | orchestrator | Thursday 19 February 2026 06:13:06 +0000 (0:00:01.151) 0:29:52.970 ***** 2026-02-19 06:13:30.208363 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:13:30.208370 | orchestrator | 2026-02-19 06:13:30.208378 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-19 06:13:30.208386 | orchestrator | Thursday 19 February 2026 06:13:07 +0000 (0:00:01.116) 0:29:54.086 ***** 2026-02-19 06:13:30.208394 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:13:30.208400 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-02-19 06:13:30.208405 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-19 06:13:30.208409 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 06:13:30.208413 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 06:13:30.208418 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 06:13:30.208433 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 06:13:30.208438 | orchestrator | 2026-02-19 06:13:30.208442 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-19 06:13:30.208446 | orchestrator | Thursday 19 February 2026 06:13:09 +0000 (0:00:02.125) 0:29:56.212 ***** 2026-02-19 06:13:30.208451 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:13:30.208455 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:13:30.208459 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-19 06:13:30.208464 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 06:13:30.208468 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 06:13:30.208472 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 06:13:30.208477 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 06:13:30.208481 | orchestrator | 2026-02-19 06:13:30.208485 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-19 06:13:30.208489 | orchestrator | Thursday 19 February 2026 06:13:12 +0000 (0:00:02.215) 0:29:58.427 
***** 2026-02-19 06:13:30.208494 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-19 06:13:30.208499 | orchestrator | 2026-02-19 06:13:30.208503 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-19 06:13:30.208508 | orchestrator | Thursday 19 February 2026 06:13:13 +0000 (0:00:01.167) 0:29:59.595 ***** 2026-02-19 06:13:30.208512 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-19 06:13:30.208516 | orchestrator | 2026-02-19 06:13:30.208521 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-19 06:13:30.208525 | orchestrator | Thursday 19 February 2026 06:13:14 +0000 (0:00:01.171) 0:30:00.766 ***** 2026-02-19 06:13:30.208535 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:13:30.208539 | orchestrator | 2026-02-19 06:13:30.208543 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-19 06:13:30.208548 | orchestrator | Thursday 19 February 2026 06:13:16 +0000 (0:00:01.538) 0:30:02.305 ***** 2026-02-19 06:13:30.208552 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:13:30.208556 | orchestrator | 2026-02-19 06:13:30.208561 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-19 06:13:30.208598 | orchestrator | Thursday 19 February 2026 06:13:17 +0000 (0:00:01.102) 0:30:03.407 ***** 2026-02-19 06:13:30.208604 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:13:30.208609 | orchestrator | 2026-02-19 06:13:30.208614 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-19 06:13:30.208618 | orchestrator | Thursday 19 February 2026 06:13:18 +0000 (0:00:01.152) 0:30:04.559 ***** 2026-02-19 06:13:30.208623 | orchestrator | skipping: [testbed-node-2] 2026-02-19 
06:13:30.208628 | orchestrator | 2026-02-19 06:13:30.208633 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-19 06:13:30.208638 | orchestrator | Thursday 19 February 2026 06:13:19 +0000 (0:00:01.130) 0:30:05.690 ***** 2026-02-19 06:13:30.208643 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:13:30.208648 | orchestrator | 2026-02-19 06:13:30.208654 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-19 06:13:30.208658 | orchestrator | Thursday 19 February 2026 06:13:21 +0000 (0:00:01.585) 0:30:07.275 ***** 2026-02-19 06:13:30.208663 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:13:30.208668 | orchestrator | 2026-02-19 06:13:30.208673 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-19 06:13:30.208678 | orchestrator | Thursday 19 February 2026 06:13:22 +0000 (0:00:01.085) 0:30:08.361 ***** 2026-02-19 06:13:30.208683 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:13:30.208687 | orchestrator | 2026-02-19 06:13:30.208692 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-19 06:13:30.208697 | orchestrator | Thursday 19 February 2026 06:13:23 +0000 (0:00:01.108) 0:30:09.470 ***** 2026-02-19 06:13:30.208702 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:13:30.208707 | orchestrator | 2026-02-19 06:13:30.208712 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-19 06:13:30.208717 | orchestrator | Thursday 19 February 2026 06:13:24 +0000 (0:00:01.522) 0:30:10.992 ***** 2026-02-19 06:13:30.208722 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:13:30.208727 | orchestrator | 2026-02-19 06:13:30.208732 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-19 06:13:30.208739 | orchestrator | Thursday 19 February 2026 
06:13:26 +0000 (0:00:01.586) 0:30:12.579 *****
2026-02-19 06:13:30.208751 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:13:30.208759 | orchestrator |
2026-02-19 06:13:30.208767 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 06:13:30.208774 | orchestrator | Thursday 19 February 2026 06:13:27 +0000 (0:00:00.756) 0:30:13.335 *****
2026-02-19 06:13:30.208782 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:13:30.208791 | orchestrator |
2026-02-19 06:13:30.208799 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 06:13:30.208808 | orchestrator | Thursday 19 February 2026 06:13:27 +0000 (0:00:00.781) 0:30:14.117 *****
2026-02-19 06:13:30.208816 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:13:30.208821 | orchestrator |
2026-02-19 06:13:30.208826 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 06:13:30.208831 | orchestrator | Thursday 19 February 2026 06:13:28 +0000 (0:00:00.770) 0:30:14.887 *****
2026-02-19 06:13:30.208836 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:13:30.208841 | orchestrator |
2026-02-19 06:13:30.208845 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 06:13:30.208851 | orchestrator | Thursday 19 February 2026 06:13:29 +0000 (0:00:00.760) 0:30:15.647 *****
2026-02-19 06:13:30.208864 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033252 | orchestrator |
2026-02-19 06:14:11.033363 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 06:14:11.033377 | orchestrator | Thursday 19 February 2026 06:13:30 +0000 (0:00:00.769) 0:30:16.417 *****
2026-02-19 06:14:11.033386 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033395 | orchestrator |
2026-02-19 06:14:11.033403 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 06:14:11.033411 | orchestrator | Thursday 19 February 2026 06:13:30 +0000 (0:00:00.782) 0:30:17.200 *****
2026-02-19 06:14:11.033418 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033426 | orchestrator |
2026-02-19 06:14:11.033434 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 06:14:11.033441 | orchestrator | Thursday 19 February 2026 06:13:31 +0000 (0:00:00.769) 0:30:17.969 *****
2026-02-19 06:14:11.033449 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:14:11.033457 | orchestrator |
2026-02-19 06:14:11.033465 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 06:14:11.033472 | orchestrator | Thursday 19 February 2026 06:13:32 +0000 (0:00:00.797) 0:30:18.767 *****
2026-02-19 06:14:11.033480 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:14:11.033487 | orchestrator |
2026-02-19 06:14:11.033494 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 06:14:11.033502 | orchestrator | Thursday 19 February 2026 06:13:33 +0000 (0:00:00.781) 0:30:19.549 *****
2026-02-19 06:14:11.033510 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:14:11.033517 | orchestrator |
2026-02-19 06:14:11.033525 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-19 06:14:11.033532 | orchestrator | Thursday 19 February 2026 06:13:34 +0000 (0:00:00.813) 0:30:20.362 *****
2026-02-19 06:14:11.033540 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033547 | orchestrator |
2026-02-19 06:14:11.033554 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-19 06:14:11.033562 | orchestrator | Thursday 19 February 2026 06:13:34 +0000 (0:00:00.775) 0:30:21.138 *****
2026-02-19 06:14:11.033569 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033576 | orchestrator |
2026-02-19 06:14:11.033628 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-19 06:14:11.033636 | orchestrator | Thursday 19 February 2026 06:13:35 +0000 (0:00:00.752) 0:30:21.891 *****
2026-02-19 06:14:11.033644 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033651 | orchestrator |
2026-02-19 06:14:11.033659 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-19 06:14:11.033666 | orchestrator | Thursday 19 February 2026 06:13:36 +0000 (0:00:00.787) 0:30:22.678 *****
2026-02-19 06:14:11.033673 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033681 | orchestrator |
2026-02-19 06:14:11.033688 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-19 06:14:11.033695 | orchestrator | Thursday 19 February 2026 06:13:37 +0000 (0:00:00.797) 0:30:23.476 *****
2026-02-19 06:14:11.033702 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033710 | orchestrator |
2026-02-19 06:14:11.033717 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-19 06:14:11.033724 | orchestrator | Thursday 19 February 2026 06:13:38 +0000 (0:00:00.754) 0:30:24.230 *****
2026-02-19 06:14:11.033732 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033739 | orchestrator |
2026-02-19 06:14:11.033746 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-19 06:14:11.033754 | orchestrator | Thursday 19 February 2026 06:13:38 +0000 (0:00:00.760) 0:30:24.991 *****
2026-02-19 06:14:11.033761 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033768 | orchestrator |
2026-02-19 06:14:11.033777 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-19 06:14:11.033786 | orchestrator | Thursday 19 February 2026 06:13:39 +0000 (0:00:00.756) 0:30:25.748 *****
2026-02-19 06:14:11.033818 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033827 | orchestrator |
2026-02-19 06:14:11.033836 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-19 06:14:11.033844 | orchestrator | Thursday 19 February 2026 06:13:40 +0000 (0:00:00.757) 0:30:26.506 *****
2026-02-19 06:14:11.033853 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033861 | orchestrator |
2026-02-19 06:14:11.033870 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-19 06:14:11.033878 | orchestrator | Thursday 19 February 2026 06:13:41 +0000 (0:00:00.746) 0:30:27.253 *****
2026-02-19 06:14:11.033887 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033895 | orchestrator |
2026-02-19 06:14:11.033903 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-19 06:14:11.033911 | orchestrator | Thursday 19 February 2026 06:13:41 +0000 (0:00:00.748) 0:30:28.001 *****
2026-02-19 06:14:11.033918 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033925 | orchestrator |
2026-02-19 06:14:11.033945 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-19 06:14:11.033953 | orchestrator | Thursday 19 February 2026 06:13:42 +0000 (0:00:00.749) 0:30:28.751 *****
2026-02-19 06:14:11.033960 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.033967 | orchestrator |
2026-02-19 06:14:11.033974 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-19 06:14:11.033982 | orchestrator | Thursday 19 February 2026 06:13:43 +0000 (0:00:00.754) 0:30:29.506 *****
2026-02-19 06:14:11.033989 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:14:11.033996 | orchestrator |
2026-02-19 06:14:11.034003 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-19 06:14:11.034011 | orchestrator | Thursday 19 February 2026 06:13:44 +0000 (0:00:01.619) 0:30:31.126 *****
2026-02-19 06:14:11.034061 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:14:11.034069 | orchestrator |
2026-02-19 06:14:11.034076 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-19 06:14:11.034083 | orchestrator | Thursday 19 February 2026 06:13:47 +0000 (0:00:02.149) 0:30:33.275 *****
2026-02-19 06:14:11.034091 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-19 06:14:11.034099 | orchestrator |
2026-02-19 06:14:11.034122 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-19 06:14:11.034130 | orchestrator | Thursday 19 February 2026 06:13:48 +0000 (0:00:01.248) 0:30:34.524 *****
2026-02-19 06:14:11.034137 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.034144 | orchestrator |
2026-02-19 06:14:11.034152 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-19 06:14:11.034159 | orchestrator | Thursday 19 February 2026 06:13:49 +0000 (0:00:01.101) 0:30:35.625 *****
2026-02-19 06:14:11.034166 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.034173 | orchestrator |
2026-02-19 06:14:11.034180 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-19 06:14:11.034188 | orchestrator | Thursday 19 February 2026 06:13:50 +0000 (0:00:01.086) 0:30:36.712 *****
2026-02-19 06:14:11.034195 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 06:14:11.034202 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 06:14:11.034209 | orchestrator |
2026-02-19 06:14:11.034216 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-19 06:14:11.034224 | orchestrator | Thursday 19 February 2026 06:13:52 +0000 (0:00:01.860) 0:30:38.572 *****
2026-02-19 06:14:11.034231 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:14:11.034238 | orchestrator |
2026-02-19 06:14:11.034245 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-19 06:14:11.034252 | orchestrator | Thursday 19 February 2026 06:13:53 +0000 (0:00:01.455) 0:30:40.028 *****
2026-02-19 06:14:11.034281 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.034288 | orchestrator |
2026-02-19 06:14:11.034304 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-19 06:14:11.034312 | orchestrator | Thursday 19 February 2026 06:13:54 +0000 (0:00:01.109) 0:30:41.138 *****
2026-02-19 06:14:11.034319 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.034326 | orchestrator |
2026-02-19 06:14:11.034333 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-19 06:14:11.034340 | orchestrator | Thursday 19 February 2026 06:13:55 +0000 (0:00:00.789) 0:30:41.927 *****
2026-02-19 06:14:11.034347 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.034355 | orchestrator |
2026-02-19 06:14:11.034362 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-19 06:14:11.034374 | orchestrator | Thursday 19 February 2026 06:13:56 +0000 (0:00:00.770) 0:30:42.698 *****
2026-02-19 06:14:11.034386 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-02-19 06:14:11.034397 | orchestrator |
2026-02-19 06:14:11.034413 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-19 06:14:11.034428 | orchestrator | Thursday 19 February 2026 06:13:57 +0000 (0:00:01.094) 0:30:43.793 *****
2026-02-19 06:14:11.034441 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:14:11.034451 | orchestrator |
2026-02-19 06:14:11.034462 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-19 06:14:11.034474 | orchestrator | Thursday 19 February 2026 06:14:00 +0000 (0:00:02.709) 0:30:46.502 *****
2026-02-19 06:14:11.034497 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-19 06:14:11.034508 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-19 06:14:11.034528 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-19 06:14:11.034540 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.034553 | orchestrator |
2026-02-19 06:14:11.034564 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-19 06:14:11.034576 | orchestrator | Thursday 19 February 2026 06:14:01 +0000 (0:00:01.175) 0:30:47.678 *****
2026-02-19 06:14:11.034637 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.034650 | orchestrator |
2026-02-19 06:14:11.034661 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-19 06:14:11.034671 | orchestrator | Thursday 19 February 2026 06:14:02 +0000 (0:00:01.101) 0:30:48.779 *****
2026-02-19 06:14:11.034682 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.034695 | orchestrator |
2026-02-19 06:14:11.034706 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-19 06:14:11.034719 | orchestrator | Thursday 19 February 2026 06:14:03 +0000 (0:00:01.151) 0:30:49.931 *****
2026-02-19 06:14:11.034732 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.034744 | orchestrator |
2026-02-19 06:14:11.034756 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-19 06:14:11.034767 | orchestrator | Thursday 19 February 2026 06:14:04 +0000 (0:00:01.186) 0:30:51.117 *****
2026-02-19 06:14:11.034774 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.034781 | orchestrator |
2026-02-19 06:14:11.034796 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-19 06:14:11.034803 | orchestrator | Thursday 19 February 2026 06:14:06 +0000 (0:00:01.123) 0:30:52.240 *****
2026-02-19 06:14:11.034811 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:11.034818 | orchestrator |
2026-02-19 06:14:11.034825 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-19 06:14:11.034832 | orchestrator | Thursday 19 February 2026 06:14:06 +0000 (0:00:00.793) 0:30:53.034 *****
2026-02-19 06:14:11.034839 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:14:11.034846 | orchestrator |
2026-02-19 06:14:11.034853 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-19 06:14:11.034868 | orchestrator | Thursday 19 February 2026 06:14:09 +0000 (0:00:02.284) 0:30:55.318 *****
2026-02-19 06:14:11.034875 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:14:11.034883 | orchestrator |
2026-02-19 06:14:11.034890 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-19 06:14:11.034897 | orchestrator | Thursday 19 February 2026 06:14:09 +0000 (0:00:00.794) 0:30:56.113 *****
2026-02-19 06:14:11.034904 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-02-19 06:14:11.034912 | orchestrator |
2026-02-19 06:14:11.034928 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-19 06:14:47.166432 | orchestrator | Thursday 19 February 2026 06:14:11 +0000 (0:00:01.129) 0:30:57.243 *****
2026-02-19 06:14:47.166545 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.166560 | orchestrator |
2026-02-19 06:14:47.166575 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-19 06:14:47.166586 | orchestrator | Thursday 19 February 2026 06:14:12 +0000 (0:00:01.124) 0:30:58.367 *****
2026-02-19 06:14:47.166621 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.166634 | orchestrator |
2026-02-19 06:14:47.166646 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-19 06:14:47.166659 | orchestrator | Thursday 19 February 2026 06:14:13 +0000 (0:00:01.138) 0:30:59.506 *****
2026-02-19 06:14:47.166671 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.166682 | orchestrator |
2026-02-19 06:14:47.166692 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-19 06:14:47.166704 | orchestrator | Thursday 19 February 2026 06:14:14 +0000 (0:00:01.145) 0:31:00.652 *****
2026-02-19 06:14:47.166714 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.166724 | orchestrator |
2026-02-19 06:14:47.166733 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-19 06:14:47.166742 | orchestrator | Thursday 19 February 2026 06:14:15 +0000 (0:00:01.147) 0:31:01.800 *****
2026-02-19 06:14:47.166752 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.166762 | orchestrator |
2026-02-19 06:14:47.166772 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-19 06:14:47.166783 | orchestrator | Thursday 19 February 2026 06:14:16 +0000 (0:00:01.138) 0:31:02.939 *****
2026-02-19 06:14:47.166794 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.166805 | orchestrator |
2026-02-19 06:14:47.166816 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-19 06:14:47.166827 | orchestrator | Thursday 19 February 2026 06:14:17 +0000 (0:00:01.164) 0:31:04.104 *****
2026-02-19 06:14:47.166838 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.166875 | orchestrator |
2026-02-19 06:14:47.166887 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-19 06:14:47.166898 | orchestrator | Thursday 19 February 2026 06:14:19 +0000 (0:00:01.137) 0:31:05.242 *****
2026-02-19 06:14:47.166910 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.166922 | orchestrator |
2026-02-19 06:14:47.166933 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-19 06:14:47.166947 | orchestrator | Thursday 19 February 2026 06:14:20 +0000 (0:00:01.119) 0:31:06.361 *****
2026-02-19 06:14:47.166964 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:14:47.166976 | orchestrator |
2026-02-19 06:14:47.166987 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-19 06:14:47.166999 | orchestrator | Thursday 19 February 2026 06:14:20 +0000 (0:00:00.851) 0:31:07.213 *****
2026-02-19 06:14:47.167011 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-02-19 06:14:47.167023 | orchestrator |
2026-02-19 06:14:47.167035 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-19 06:14:47.167046 | orchestrator | Thursday 19 February 2026 06:14:22 +0000 (0:00:01.109) 0:31:08.322 *****
2026-02-19 06:14:47.167057 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-02-19 06:14:47.167102 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-19 06:14:47.167117 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-19 06:14:47.167129 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-19 06:14:47.167139 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-19 06:14:47.167151 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-19 06:14:47.167162 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-19 06:14:47.167173 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-19 06:14:47.167183 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-19 06:14:47.167193 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-19 06:14:47.167203 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-19 06:14:47.167214 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-19 06:14:47.167226 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-19 06:14:47.167236 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 06:14:47.167247 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-02-19 06:14:47.167259 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-02-19 06:14:47.167269 | orchestrator |
2026-02-19 06:14:47.167297 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-19 06:14:47.167309 | orchestrator | Thursday 19 February 2026 06:14:28 +0000 (0:00:06.545) 0:31:14.868 *****
2026-02-19 06:14:47.167320 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167339 | orchestrator |
2026-02-19 06:14:47.167352 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-19 06:14:47.167363 | orchestrator | Thursday 19 February 2026 06:14:29 +0000 (0:00:00.763) 0:31:15.632
2026-02-19 06:14:47.167373 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167384 | orchestrator |
2026-02-19 06:14:47.167394 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 06:14:47.167404 | orchestrator | Thursday 19 February 2026 06:14:30 +0000 (0:00:00.775) 0:31:16.408 *****
2026-02-19 06:14:47.167415 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167426 | orchestrator |
2026-02-19 06:14:47.167436 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 06:14:47.167447 | orchestrator | Thursday 19 February 2026 06:14:30 +0000 (0:00:00.799) 0:31:17.207 *****
2026-02-19 06:14:47.167458 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167469 | orchestrator |
2026-02-19 06:14:47.167479 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 06:14:47.167513 | orchestrator | Thursday 19 February 2026 06:14:31 +0000 (0:00:00.754) 0:31:17.962 *****
2026-02-19 06:14:47.167525 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167536 | orchestrator |
2026-02-19 06:14:47.167547 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 06:14:47.167558 | orchestrator | Thursday 19 February 2026 06:14:32 +0000 (0:00:00.764) 0:31:18.726 *****
2026-02-19 06:14:47.167569 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167580 | orchestrator |
2026-02-19 06:14:47.167618 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 06:14:47.167632 | orchestrator | Thursday 19 February 2026 06:14:33 +0000 (0:00:00.791) 0:31:19.518 *****
2026-02-19 06:14:47.167644 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167655 | orchestrator |
2026-02-19 06:14:47.167667 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 06:14:47.167679 | orchestrator | Thursday 19 February 2026 06:14:34 +0000 (0:00:00.773) 0:31:20.292 *****
2026-02-19 06:14:47.167690 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167701 | orchestrator |
2026-02-19 06:14:47.167712 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 06:14:47.167741 | orchestrator | Thursday 19 February 2026 06:14:34 +0000 (0:00:00.767) 0:31:21.060 *****
2026-02-19 06:14:47.167753 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167763 | orchestrator |
2026-02-19 06:14:47.167774 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 06:14:47.167784 | orchestrator | Thursday 19 February 2026 06:14:35 +0000 (0:00:00.787) 0:31:21.847 *****
2026-02-19 06:14:47.167795 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167806 | orchestrator |
2026-02-19 06:14:47.167816 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 06:14:47.167828 | orchestrator | Thursday 19 February 2026 06:14:36 +0000 (0:00:00.778) 0:31:22.625 *****
2026-02-19 06:14:47.167839 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167850 | orchestrator |
2026-02-19 06:14:47.167861 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 06:14:47.167873 | orchestrator | Thursday 19 February 2026 06:14:37 +0000 (0:00:00.788) 0:31:23.414 *****
2026-02-19 06:14:47.167884 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167895 | orchestrator |
2026-02-19 06:14:47.167906 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 06:14:47.167918 | orchestrator | Thursday 19 February 2026 06:14:37 +0000 (0:00:00.749) 0:31:24.163 *****
2026-02-19 06:14:47.167929 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167941 | orchestrator |
2026-02-19 06:14:47.167952 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 06:14:47.167962 | orchestrator | Thursday 19 February 2026 06:14:38 +0000 (0:00:00.861) 0:31:25.025 *****
2026-02-19 06:14:47.167974 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.167986 | orchestrator |
2026-02-19 06:14:47.167997 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-19 06:14:47.168008 | orchestrator | Thursday 19 February 2026 06:14:39 +0000 (0:00:00.760) 0:31:25.786 *****
2026-02-19 06:14:47.168019 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.168031 | orchestrator |
2026-02-19 06:14:47.168042 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-19 06:14:47.168052 | orchestrator | Thursday 19 February 2026 06:14:40 +0000 (0:00:00.846) 0:31:26.633 *****
2026-02-19 06:14:47.168061 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.168070 | orchestrator |
2026-02-19 06:14:47.168079 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-19 06:14:47.168090 | orchestrator | Thursday 19 February 2026 06:14:41 +0000 (0:00:00.778) 0:31:27.412 *****
2026-02-19 06:14:47.168102 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.168113 | orchestrator |
2026-02-19 06:14:47.168124 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:14:47.168137 | orchestrator | Thursday 19 February 2026 06:14:41 +0000 (0:00:00.767) 0:31:28.179 *****
2026-02-19 06:14:47.168148 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.168159 | orchestrator |
2026-02-19 06:14:47.168169 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:14:47.168178 | orchestrator | Thursday 19 February 2026 06:14:42 +0000 (0:00:00.765) 0:31:28.944 *****
2026-02-19 06:14:47.168187 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.168197 | orchestrator |
2026-02-19 06:14:47.168207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:14:47.168217 | orchestrator | Thursday 19 February 2026 06:14:43 +0000 (0:00:00.759) 0:31:29.704 *****
2026-02-19 06:14:47.168237 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.168248 | orchestrator |
2026-02-19 06:14:47.168259 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:14:47.168270 | orchestrator | Thursday 19 February 2026 06:14:44 +0000 (0:00:00.773) 0:31:30.478 *****
2026-02-19 06:14:47.168280 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.168299 | orchestrator |
2026-02-19 06:14:47.168310 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:14:47.168322 | orchestrator | Thursday 19 February 2026 06:14:45 +0000 (0:00:00.753) 0:31:31.231 *****
2026-02-19 06:14:47.168333 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-19 06:14:47.168344 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-19 06:14:47.168355 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-19 06:14:47.168366 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:14:47.168377 | orchestrator |
2026-02-19 06:14:47.168388 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 06:14:47.168397 | orchestrator | Thursday 19 February 2026 06:14:46 +0000 (0:00:01.100) 0:31:32.332 *****
2026-02-19 06:14:47.168407 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-19 06:14:47.168426 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-19 06:15:44.265481 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-19 06:15:44.265599 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:15:44.265693 | orchestrator |
2026-02-19 06:15:44.265711 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 06:15:44.265724 | orchestrator | Thursday 19 February 2026 06:14:47 +0000 (0:00:01.039) 0:31:33.371 *****
2026-02-19 06:15:44.265735 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-19 06:15:44.265746 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-19 06:15:44.265757 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-19 06:15:44.265768 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:15:44.265779 | orchestrator |
2026-02-19 06:15:44.265796 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 06:15:44.265817 | orchestrator | Thursday 19 February 2026 06:14:48 +0000 (0:00:01.084) 0:31:34.456 *****
2026-02-19 06:15:44.265837 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:15:44.265856 | orchestrator |
2026-02-19 06:15:44.265876 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 06:15:44.265897 | orchestrator | Thursday 19 February 2026 06:14:49 +0000 (0:00:00.776) 0:31:35.232 *****
2026-02-19 06:15:44.265918 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-19 06:15:44.265934 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:15:44.265945 | orchestrator |
2026-02-19 06:15:44.265956 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-19 06:15:44.265968 | orchestrator | Thursday 19 February 2026 06:14:49 +0000 (0:00:00.901) 0:31:36.134 *****
2026-02-19 06:15:44.265979 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:15:44.265990 | orchestrator |
2026-02-19 06:15:44.266001 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-19 06:15:44.266014 | orchestrator | Thursday 19 February 2026 06:14:51 +0000 (0:00:01.394) 0:31:37.528 *****
2026-02-19 06:15:44.266089 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:15:44.266103 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:15:44.266115 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-19 06:15:44.266128 | orchestrator |
2026-02-19 06:15:44.266140 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-19 06:15:44.266157 | orchestrator | Thursday 19 February 2026 06:14:52 +0000 (0:00:01.611) 0:31:39.140 *****
2026-02-19 06:15:44.266176 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-02-19 06:15:44.266194 | orchestrator |
2026-02-19 06:15:44.266212 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-19 06:15:44.266231 | orchestrator | Thursday 19 February 2026 06:14:54 +0000 (0:00:01.102) 0:31:40.243 *****
2026-02-19 06:15:44.266251 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:15:44.266271 | orchestrator |
2026-02-19 06:15:44.266314 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-19 06:15:44.266327 | orchestrator | Thursday 19 February 2026 06:14:55 +0000 (0:00:01.492) 0:31:41.736 *****
2026-02-19 06:15:44.266338 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:15:44.266349 | orchestrator |
2026-02-19 06:15:44.266360 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-19 06:15:44.266371 | orchestrator | Thursday 19 February 2026 06:14:56 +0000 (0:00:01.147) 0:31:42.883 *****
2026-02-19 06:15:44.266381 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 06:15:44.266393 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 06:15:44.266404 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 06:15:44.266414 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-02-19 06:15:44.266425 | orchestrator |
2026-02-19 06:15:44.266436 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-19 06:15:44.266447 | orchestrator | Thursday 19 February 2026 06:15:04 +0000 (0:00:07.699) 0:31:50.583 *****
2026-02-19 06:15:44.266457 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:15:44.266468 | orchestrator |
2026-02-19 06:15:44.266479 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-19 06:15:44.266489 | orchestrator | Thursday 19 February 2026 06:15:05 +0000 (0:00:01.194) 0:31:51.777 *****
2026-02-19 06:15:44.266500 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-19 06:15:44.266510 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-19 06:15:44.266521 | orchestrator |
2026-02-19 06:15:44.266543 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-19 06:15:44.266554 | orchestrator | Thursday 19 February 2026 06:15:08 +0000 (0:00:03.161) 0:31:54.939 *****
2026-02-19 06:15:44.266565 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-19 06:15:44.266576 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-19 06:15:44.266586 | orchestrator |
2026-02-19 06:15:44.266597 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-19 06:15:44.266608 | orchestrator | Thursday 19 February 2026 06:15:10 +0000 (0:00:02.116) 0:31:57.056 *****
2026-02-19 06:15:44.266642 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:15:44.266654 | orchestrator |
2026-02-19 06:15:44.266665 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-19 06:15:44.266676 | orchestrator | Thursday 19 February 2026 06:15:12 +0000 (0:00:01.505) 0:31:58.561 *****
2026-02-19 06:15:44.266687 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:15:44.266698 | orchestrator |
2026-02-19 06:15:44.266708 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-19 06:15:44.266719 | orchestrator | Thursday 19 February 2026 06:15:13 +0000 (0:00:00.763) 0:31:59.325 *****
2026-02-19 06:15:44.266729 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:15:44.266740 | orchestrator |
2026-02-19 06:15:44.266751 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-19 06:15:44.266781 | orchestrator | Thursday 19 February 2026 06:15:13 +0000 (0:00:00.753) 0:32:00.079 *****
2026-02-19 06:15:44.266808 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-02-19 06:15:44.266828 | orchestrator |
2026-02-19 06:15:44.266839 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-19 06:15:44.266850 | orchestrator | Thursday 19 February 2026 06:15:14 +0000 (0:00:01.089) 0:32:01.168 *****
2026-02-19 06:15:44.266861 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:15:44.266872 | orchestrator |
2026-02-19 06:15:44.266883 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-19 06:15:44.266893 | orchestrator | Thursday 19 February 2026 06:15:16 +0000 (0:00:01.137) 0:32:02.306 *****
2026-02-19 06:15:44.266904 | orchestrator | skipping: [testbed-node-2]
2026-02-19 06:15:44.266915 | orchestrator |
2026-02-19 06:15:44.266926 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-19 06:15:44.266945 | orchestrator | Thursday 19 February 2026 06:15:17 +0000 (0:00:01.125) 0:32:03.431 *****
2026-02-19 06:15:44.266956 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-02-19 06:15:44.266967 | orchestrator |
2026-02-19 06:15:44.266978 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-19 06:15:44.266988 | orchestrator | Thursday 19 February 2026 06:15:18 +0000 (0:00:01.208) 0:32:04.639 *****
2026-02-19 06:15:44.266999 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:15:44.267010 | orchestrator |
2026-02-19 06:15:44.267021 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-19 06:15:44.267031 | orchestrator | Thursday 19 February 2026 06:15:20 +0000 (0:00:02.080) 0:32:06.720 *****
2026-02-19 06:15:44.267042 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:15:44.267053 | orchestrator |
2026-02-19 06:15:44.267063 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-19 06:15:44.267074 | orchestrator | Thursday 19 February 2026 06:15:22 +0000 (0:00:02.526) 0:32:08.727 *****
2026-02-19 06:15:44.267085 | orchestrator | ok: [testbed-node-2]
2026-02-19 06:15:44.267095 | orchestrator |
2026-02-19 06:15:44.267106 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-19 06:15:44.267116 | orchestrator | Thursday 19 February 2026 06:15:25 +0000 (0:00:02.526) 0:32:11.253 *****
2026-02-19 06:15:44.267127 | orchestrator | changed: [testbed-node-2]
2026-02-19 06:15:44.267138 | orchestrator |
2026-02-19 06:15:44.267149 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml]
************************************** 2026-02-19 06:15:44.267159 | orchestrator | Thursday 19 February 2026 06:15:28 +0000 (0:00:03.705) 0:32:14.959 ***** 2026-02-19 06:15:44.267170 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-19 06:15:44.267181 | orchestrator | 2026-02-19 06:15:44.267191 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-19 06:15:44.267202 | orchestrator | Thursday 19 February 2026 06:15:30 +0000 (0:00:01.518) 0:32:16.478 ***** 2026-02-19 06:15:44.267213 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:15:44.267228 | orchestrator | 2026-02-19 06:15:44.267248 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-19 06:15:44.267268 | orchestrator | Thursday 19 February 2026 06:15:32 +0000 (0:00:02.455) 0:32:18.933 ***** 2026-02-19 06:15:44.267289 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:15:44.267309 | orchestrator | 2026-02-19 06:15:44.267324 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-19 06:15:44.267335 | orchestrator | Thursday 19 February 2026 06:15:35 +0000 (0:00:02.529) 0:32:21.463 ***** 2026-02-19 06:15:44.267346 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:15:44.267357 | orchestrator | 2026-02-19 06:15:44.267368 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-19 06:15:44.267378 | orchestrator | Thursday 19 February 2026 06:15:36 +0000 (0:00:01.303) 0:32:22.767 ***** 2026-02-19 06:15:44.267389 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:15:44.267400 | orchestrator | 2026-02-19 06:15:44.267413 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-19 06:15:44.267432 | orchestrator | Thursday 19 February 2026 
06:15:37 +0000 (0:00:01.141) 0:32:23.909 ***** 2026-02-19 06:15:44.267450 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-19 06:15:44.267468 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-19 06:15:44.267488 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:15:44.267506 | orchestrator | 2026-02-19 06:15:44.267524 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-02-19 06:15:44.267542 | orchestrator | Thursday 19 February 2026 06:15:39 +0000 (0:00:01.580) 0:32:25.490 ***** 2026-02-19 06:15:44.267571 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-19 06:15:44.267592 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-19 06:15:44.267646 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-19 06:15:44.267658 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-19 06:15:44.267669 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:15:44.267680 | orchestrator | 2026-02-19 06:15:44.267694 | orchestrator | PLAY [Set osd flags] *********************************************************** 2026-02-19 06:15:44.267713 | orchestrator | 2026-02-19 06:15:44.267732 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 06:15:44.267751 | orchestrator | Thursday 19 February 2026 06:15:41 +0000 (0:00:01.834) 0:32:27.324 ***** 2026-02-19 06:15:44.267770 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:15:44.267790 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:15:44.267810 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:15:44.267831 | orchestrator | 2026-02-19 06:15:44.267850 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 06:15:44.267861 | orchestrator | Thursday 19 February 2026 06:15:42 +0000 (0:00:01.617) 0:32:28.941 ***** 2026-02-19 06:15:44.267872 | 
orchestrator | ok: [testbed-node-3] 2026-02-19 06:15:44.267883 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:15:44.267894 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:15:44.267904 | orchestrator | 2026-02-19 06:15:44.267926 | orchestrator | TASK [Get pool list] *********************************************************** 2026-02-19 06:15:51.148244 | orchestrator | Thursday 19 February 2026 06:15:44 +0000 (0:00:01.532) 0:32:30.473 ***** 2026-02-19 06:15:51.148341 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:15:51.148353 | orchestrator | 2026-02-19 06:15:51.148362 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-02-19 06:15:51.148371 | orchestrator | Thursday 19 February 2026 06:15:47 +0000 (0:00:03.155) 0:32:33.629 ***** 2026-02-19 06:15:51.148380 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:15:51.148388 | orchestrator | 2026-02-19 06:15:51.148396 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-02-19 06:15:51.148404 | orchestrator | Thursday 19 February 2026 06:15:50 +0000 (0:00:03.169) 0:32:36.799 ***** 2026-02-19 06:15:51.148419 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-02-19T03:44:27.574016+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 06:15:51.148483 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-02-19T03:45:40.536875+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 
'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 06:15:51.148496 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-02-19T03:45:44.766371+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 
'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '86', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 06:15:51.148521 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-02-19T03:46:41.136819+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 
2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '68', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '62', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 06:15:51.576656 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-02-19T03:46:47.020504+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '68', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '62', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 06:15:51.576901 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-02-19T03:46:53.340565+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 
'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '68', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '64', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 06:15:51.576945 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-02-19T03:46:59.358457+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 
'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '175', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '64', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 06:15:51.576973 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': 
'2026-02-19T03:47:05.692848+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '68', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '66', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 06:15:51.576996 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-02-19T03:47:17.394362+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '68', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '66', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 06:15:53.315803 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 
=> (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-02-19T03:48:03.912773+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '85', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 85, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 
06:15:53.315894 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-02-19T03:48:12.664947+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '94', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 94, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 
'average_primary_affinity_weighted': 1}}) 2026-02-19 06:15:53.315961 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-02-19T03:48:21.897806+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '185', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 185, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 
1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 06:15:53.315977 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-02-19T03:48:31.074064+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '108', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 108, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 
'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 06:15:53.316014 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-02-19T03:48:39.861465+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '117', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 117, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 
'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-19 06:17:42.980111 | orchestrator | 2026-02-19 06:17:42.980249 | orchestrator | TASK [Disable balancer] ******************************************************** 2026-02-19 06:17:42.980268 | orchestrator | Thursday 19 February 2026 06:15:53 +0000 (0:00:02.733) 0:32:39.532 ***** 2026-02-19 06:17:42.980281 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:17:42.980293 | orchestrator | 2026-02-19 06:17:42.980305 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-02-19 06:17:42.980316 | orchestrator | Thursday 19 February 2026 06:15:56 +0000 (0:00:02.983) 0:32:42.516 ***** 2026-02-19 06:17:42.980328 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-19 06:17:42.980342 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-02-19 06:17:42.980353 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-02-19 06:17:42.980365 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-02-19 06:17:42.980377 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-02-19 06:17:42.980389 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-02-19 06:17:42.980400 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 
'on'}) 2026-02-19 06:17:42.980437 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-02-19 06:17:42.980449 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-02-19 06:17:42.980460 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-02-19 06:17:42.980471 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-02-19 06:17:42.980482 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-02-19 06:17:42.980493 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-19 06:17:42.980504 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-19 06:17:42.980515 | orchestrator | 2026-02-19 06:17:42.980526 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-02-19 06:17:42.980537 | orchestrator | Thursday 19 February 2026 06:17:14 +0000 (0:01:18.196) 0:34:00.713 ***** 2026-02-19 06:17:42.980547 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-19 06:17:42.980558 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-19 06:17:42.980569 | orchestrator | 2026-02-19 06:17:42.980580 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-19 06:17:42.980593 | orchestrator | 2026-02-19 06:17:42.980624 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 06:17:42.980685 | orchestrator | Thursday 19 February 2026 06:17:20 +0000 (0:00:05.947) 0:34:06.660 ***** 2026-02-19 06:17:42.980700 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 
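The "Disable balancer", "Disable pg autoscale on pools", and "Set osd flags" tasks above prepare the cluster for the OSD upgrade with plain `ceph` CLI calls (run inside a mon container by ceph-ansible). A minimal sketch of the command strings involved — helper names are hypothetical, and the commands are only built here, not executed:

```python
# Hypothetical sketch of the ceph CLI commands behind the maintenance tasks above.
# ceph-ansible issues these via the mon container; we only build the strings.

def balancer_off_cmd():
    # "Disable balancer" task
    return "ceph balancer off"

def autoscale_cmds(pools):
    """One 'pg_autoscale_mode off' per pool whose mode is currently 'on';
    pools already 'off' are skipped, matching the skipped items in the log."""
    return [
        f"ceph osd pool set {p['name']} pg_autoscale_mode off"
        for p in pools
        if p["mode"] == "on"
    ]

def osd_flag_cmds(flags=("noout", "nodeep-scrub")):
    # "Set osd flags" task: keep OSDs in and quiet deep scrubs during the upgrade
    return [f"ceph osd set {f}" for f in flags]
```

This mirrors the loop items above: pools like `.mgr` and `cephfs_data` (mode `on`) are changed, while `backups`, `volumes`, `images`, `metrics`, and `vms` (mode `off`) are skipped.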
2026-02-19 06:17:42.980713 | orchestrator | 2026-02-19 06:17:42.980727 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-19 06:17:42.980739 | orchestrator | Thursday 19 February 2026 06:17:21 +0000 (0:00:01.143) 0:34:07.804 ***** 2026-02-19 06:17:42.980752 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:17:42.980765 | orchestrator | 2026-02-19 06:17:42.980777 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-19 06:17:42.980806 | orchestrator | Thursday 19 February 2026 06:17:22 +0000 (0:00:01.420) 0:34:09.225 ***** 2026-02-19 06:17:42.980818 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:17:42.980831 | orchestrator | 2026-02-19 06:17:42.980843 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 06:17:42.980856 | orchestrator | Thursday 19 February 2026 06:17:24 +0000 (0:00:01.125) 0:34:10.351 ***** 2026-02-19 06:17:42.980869 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:17:42.980881 | orchestrator | 2026-02-19 06:17:42.980893 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 06:17:42.980905 | orchestrator | Thursday 19 February 2026 06:17:25 +0000 (0:00:01.436) 0:34:11.787 ***** 2026-02-19 06:17:42.980918 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:17:42.980930 | orchestrator | 2026-02-19 06:17:42.980942 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-19 06:17:42.980955 | orchestrator | Thursday 19 February 2026 06:17:26 +0000 (0:00:01.138) 0:34:12.926 ***** 2026-02-19 06:17:42.980968 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:17:42.980980 | orchestrator | 2026-02-19 06:17:42.980991 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-19 06:17:42.981001 | orchestrator | Thursday 19 February 2026 
06:17:27 +0000 (0:00:01.124) 0:34:14.050 ***** 2026-02-19 06:17:42.981012 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:17:42.981023 | orchestrator | 2026-02-19 06:17:42.981034 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-19 06:17:42.981045 | orchestrator | Thursday 19 February 2026 06:17:28 +0000 (0:00:01.136) 0:34:15.187 ***** 2026-02-19 06:17:42.981056 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:17:42.981067 | orchestrator | 2026-02-19 06:17:42.981078 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-19 06:17:42.981118 | orchestrator | Thursday 19 February 2026 06:17:30 +0000 (0:00:01.133) 0:34:16.321 ***** 2026-02-19 06:17:42.981130 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:17:42.981141 | orchestrator | 2026-02-19 06:17:42.981152 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-19 06:17:42.981163 | orchestrator | Thursday 19 February 2026 06:17:31 +0000 (0:00:01.136) 0:34:17.457 ***** 2026-02-19 06:17:42.981175 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:17:42.981186 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:17:42.981196 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:17:42.981207 | orchestrator | 2026-02-19 06:17:42.981218 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-19 06:17:42.981229 | orchestrator | Thursday 19 February 2026 06:17:32 +0000 (0:00:01.628) 0:34:19.085 ***** 2026-02-19 06:17:42.981240 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:17:42.981251 | orchestrator | 2026-02-19 06:17:42.981262 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 
2026-02-19 06:17:42.981273 | orchestrator | Thursday 19 February 2026 06:17:34 +0000 (0:00:01.225) 0:34:20.311 ***** 2026-02-19 06:17:42.981287 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:17:42.981305 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:17:42.981322 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:17:42.981340 | orchestrator | 2026-02-19 06:17:42.981357 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-19 06:17:42.981375 | orchestrator | Thursday 19 February 2026 06:17:37 +0000 (0:00:03.298) 0:34:23.610 ***** 2026-02-19 06:17:42.981392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-19 06:17:42.981410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-19 06:17:42.981426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-19 06:17:42.981444 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:17:42.981463 | orchestrator | 2026-02-19 06:17:42.981482 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-19 06:17:42.981501 | orchestrator | Thursday 19 February 2026 06:17:38 +0000 (0:00:01.395) 0:34:25.006 ***** 2026-02-19 06:17:42.981522 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-19 06:17:42.981544 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-19 
06:17:42.981564 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-19 06:17:42.981583 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:17:42.981601 | orchestrator | 2026-02-19 06:17:42.981620 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-19 06:17:42.981639 | orchestrator | Thursday 19 February 2026 06:17:40 +0000 (0:00:01.888) 0:34:26.894 ***** 2026-02-19 06:17:42.981712 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:17:42.981777 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:17:42.981799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 
06:17:42.981819 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:17:42.981838 | orchestrator | 2026-02-19 06:17:42.981857 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-19 06:17:42.981875 | orchestrator | Thursday 19 February 2026 06:17:41 +0000 (0:00:01.137) 0:34:28.031 ***** 2026-02-19 06:17:42.981912 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 06:17:34.638444', 'end': '2026-02-19 06:17:34.692905', 'delta': '0:00:00.054461', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-19 06:18:00.008213 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 06:17:35.211736', 'end': '2026-02-19 06:17:35.262132', 'delta': '0:00:00.050396', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-19 06:18:00.008355 | orchestrator | ok: [testbed-node-3] => 
(item={'changed': False, 'stdout': '8bdbabe346bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 06:17:36.142517', 'end': '2026-02-19 06:17:36.199140', 'delta': '0:00:00.056623', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8bdbabe346bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-19 06:18:00.008375 | orchestrator | 2026-02-19 06:18:00.008388 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-19 06:18:00.008401 | orchestrator | Thursday 19 February 2026 06:17:42 +0000 (0:00:01.158) 0:34:29.190 ***** 2026-02-19 06:18:00.008411 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:18:00.008422 | orchestrator | 2026-02-19 06:18:00.008433 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-19 06:18:00.008470 | orchestrator | Thursday 19 February 2026 06:17:44 +0000 (0:00:01.580) 0:34:30.770 ***** 2026-02-19 06:18:00.008480 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:00.008491 | orchestrator | 2026-02-19 06:18:00.008500 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-19 06:18:00.008510 | orchestrator | Thursday 19 February 2026 06:17:45 +0000 (0:00:01.250) 0:34:32.021 ***** 2026-02-19 06:18:00.008520 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:18:00.008530 | orchestrator | 2026-02-19 06:18:00.008553 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-19 06:18:00.008564 | orchestrator | 
Thursday 19 February 2026 06:17:46 +0000 (0:00:01.121) 0:34:33.143 ***** 2026-02-19 06:18:00.008573 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:18:00.008585 | orchestrator | 2026-02-19 06:18:00.008660 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 06:18:00.008680 | orchestrator | Thursday 19 February 2026 06:17:48 +0000 (0:00:01.977) 0:34:35.121 ***** 2026-02-19 06:18:00.008696 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:18:00.008713 | orchestrator | 2026-02-19 06:18:00.008730 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-19 06:18:00.008747 | orchestrator | Thursday 19 February 2026 06:17:50 +0000 (0:00:01.125) 0:34:36.246 ***** 2026-02-19 06:18:00.008762 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:00.008773 | orchestrator | 2026-02-19 06:18:00.008784 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-19 06:18:00.008796 | orchestrator | Thursday 19 February 2026 06:17:51 +0000 (0:00:01.151) 0:34:37.397 ***** 2026-02-19 06:18:00.008808 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:00.008819 | orchestrator | 2026-02-19 06:18:00.008831 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 06:18:00.008842 | orchestrator | Thursday 19 February 2026 06:17:52 +0000 (0:00:01.206) 0:34:38.604 ***** 2026-02-19 06:18:00.008853 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:00.008863 | orchestrator | 2026-02-19 06:18:00.008873 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-19 06:18:00.008882 | orchestrator | Thursday 19 February 2026 06:17:53 +0000 (0:00:01.101) 0:34:39.706 ***** 2026-02-19 06:18:00.008892 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:00.008901 | orchestrator | 
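The "Find a running mon container" / "Set_fact running_mon - container" tasks above loop over the monitors with `docker ps -q --filter name=ceph-mon-<host>` and keep the first host that returns a container id; the subsequent fsid tasks then query that mon (presumably via a `docker exec` wrapper, per the `container_exec_cmd` fact). A small sketch of that selection logic, with hypothetical helper names:

```python
# Hypothetical sketch of the running-mon discovery loop shown in the log above.
# Command strings only; nothing is executed against a real cluster here.

def mon_ps_cmd(host):
    # matches the 'cmd' recorded for each loop item in the task output
    return ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{host}"]

def pick_running_mon(ps_results):
    """ps_results maps host -> docker ps stdout.
    Return the first host whose ceph-mon container is up (non-empty id)."""
    for host, stdout in ps_results.items():
        if stdout.strip():
            return host
    return None
```

In the run above all three mons return an id (`e3a5d710b112`, `a4335e23f9f2`, `8bdbabe346bf`), so `testbed-node-0` is selected and "Get current fsid" is delegated to it.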
2026-02-19 06:18:00.008911 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-19 06:18:00.008920 | orchestrator | Thursday 19 February 2026 06:17:54 +0000 (0:00:01.083) 0:34:40.789 ***** 2026-02-19 06:18:00.008929 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:18:00.008939 | orchestrator | 2026-02-19 06:18:00.008948 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-19 06:18:00.008958 | orchestrator | Thursday 19 February 2026 06:17:55 +0000 (0:00:00.973) 0:34:41.762 ***** 2026-02-19 06:18:00.008968 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:00.008980 | orchestrator | 2026-02-19 06:18:00.008996 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 06:18:00.009012 | orchestrator | Thursday 19 February 2026 06:17:56 +0000 (0:00:01.086) 0:34:42.849 ***** 2026-02-19 06:18:00.009029 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:18:00.009044 | orchestrator | 2026-02-19 06:18:00.009060 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 06:18:00.009076 | orchestrator | Thursday 19 February 2026 06:17:57 +0000 (0:00:00.935) 0:34:43.784 ***** 2026-02-19 06:18:00.009115 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:00.009133 | orchestrator | 2026-02-19 06:18:00.009148 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 06:18:00.009159 | orchestrator | Thursday 19 February 2026 06:17:58 +0000 (0:00:00.897) 0:34:44.681 ***** 2026-02-19 06:18:00.009168 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:18:00.009178 | orchestrator | 2026-02-19 06:18:00.009188 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 06:18:00.009208 | orchestrator | Thursday 19 February 2026 06:17:59 +0000 (0:00:01.122) 
0:34:45.804 ***** 2026-02-19 06:18:00.009220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:18:00.009234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542', 'dm-uuid-LVM-lX34uhB8tmDTkL93DczNXv6QbAw0ysjKmdjNAgdMohU9ZcAXcHNfClcWYQxdmajV'], 'uuids': ['76bd5aba-0bb7-430d-953d-ee2f2591c83e'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV']}})  2026-02-19 06:18:00.009253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417', 'scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '50533a39', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 06:18:00.009266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-he7JRo-1c5L-pX5O-Be3A-VFvn-vFA2-R1K8r6', 'scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e', 'scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159']}})  2026-02-19 06:18:00.009277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:18:00.009287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:18:00.009306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-25-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 06:18:01.491023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:18:01.491126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl', 'dm-uuid-CRYPT-LUKS2-96c3bdbb8dfb4f8d89601607ffc96021-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:18:01.491247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:18:01.491280 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159', 'dm-uuid-LVM-woOiLPc2MZX9tMqNu9mJ52M00GUnNLJGpmysyPKim6lEMTRsO9IDguylIzFZfnRl'], 'uuids': ['96c3bdbb-8dfb-4f8d-8960-1607ffc96021'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl']}})  2026-02-19 06:18:01.491293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qeKANd-btTr-kyqx-ZYbg-qz1F-HqnA-ll4bBH', 'scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8', 'scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542']}})  2026-02-19 06:18:01.491305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:18:01.491340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23a82e55', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 06:18:01.491390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:18:01.491409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:18:01.491425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV', 'dm-uuid-CRYPT-LUKS2-76bd5aba0bb7430d953dee2f2591c83e-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:18:01.491444 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:01.491463 | orchestrator | 2026-02-19 06:18:01.491482 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 06:18:01.491501 | orchestrator | Thursday 19 February 2026 06:18:01 +0000 (0:00:01.689) 0:34:47.493 ***** 2026-02-19 06:18:01.491518 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:01.491562 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542', 'dm-uuid-LVM-lX34uhB8tmDTkL93DczNXv6QbAw0ysjKmdjNAgdMohU9ZcAXcHNfClcWYQxdmajV'], 'uuids': ['76bd5aba-0bb7-430d-953d-ee2f2591c83e'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:02.618508 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417', 'scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '50533a39', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:02.618731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-he7JRo-1c5L-pX5O-Be3A-VFvn-vFA2-R1K8r6', 'scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e', 'scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:02.618758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:02.618771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:02.618803 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:02.618835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:02.618848 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl', 'dm-uuid-CRYPT-LUKS2-96c3bdbb8dfb4f8d89601607ffc96021-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:02.618865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:02.618878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159', 'dm-uuid-LVM-woOiLPc2MZX9tMqNu9mJ52M00GUnNLJGpmysyPKim6lEMTRsO9IDguylIzFZfnRl'], 'uuids': ['96c3bdbb-8dfb-4f8d-8960-1607ffc96021'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:02.618890 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qeKANd-btTr-kyqx-ZYbg-qz1F-HqnA-ll4bBH', 'scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8', 'scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:02.618919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:21.810415 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23a82e55', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:21.810583 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:21.810627 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:21.810690 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV', 'dm-uuid-CRYPT-LUKS2-76bd5aba0bb7430d953dee2f2591c83e-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:18:21.810705 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:21.810719 | orchestrator | 2026-02-19 06:18:21.810732 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 06:18:21.810744 | orchestrator | Thursday 19 February 2026 06:18:02 +0000 (0:00:01.342) 0:34:48.836 ***** 2026-02-19 06:18:21.810755 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:18:21.810766 | orchestrator | 2026-02-19 06:18:21.810778 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 06:18:21.810789 | orchestrator | Thursday 19 February 2026 06:18:04 +0000 (0:00:01.467) 0:34:50.304 ***** 2026-02-19 06:18:21.810800 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:18:21.810811 | orchestrator | 2026-02-19 06:18:21.810822 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:18:21.810833 | orchestrator | Thursday 19 February 2026 06:18:05 +0000 (0:00:01.132) 0:34:51.437 ***** 2026-02-19 06:18:21.810844 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:18:21.810856 | orchestrator | 2026-02-19 06:18:21.810867 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:18:21.810878 | orchestrator | Thursday 19 February 2026 06:18:06 +0000 (0:00:01.434) 0:34:52.871 ***** 2026-02-19 06:18:21.810889 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:21.810901 | orchestrator | 2026-02-19 06:18:21.810912 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:18:21.810923 | orchestrator | Thursday 19 February 2026 06:18:07 +0000 (0:00:01.108) 0:34:53.980 ***** 2026-02-19 06:18:21.810934 | orchestrator | skipping: [testbed-node-3] 2026-02-19 
06:18:21.810946 | orchestrator | 2026-02-19 06:18:21.810963 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:18:21.810974 | orchestrator | Thursday 19 February 2026 06:18:08 +0000 (0:00:01.229) 0:34:55.209 ***** 2026-02-19 06:18:21.810985 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:21.810996 | orchestrator | 2026-02-19 06:18:21.811008 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 06:18:21.811018 | orchestrator | Thursday 19 February 2026 06:18:10 +0000 (0:00:01.109) 0:34:56.319 ***** 2026-02-19 06:18:21.811039 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-19 06:18:21.811051 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-19 06:18:21.811063 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-19 06:18:21.811074 | orchestrator | 2026-02-19 06:18:21.811085 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 06:18:21.811097 | orchestrator | Thursday 19 February 2026 06:18:11 +0000 (0:00:01.892) 0:34:58.212 ***** 2026-02-19 06:18:21.811108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-19 06:18:21.811119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-19 06:18:21.811131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-19 06:18:21.811142 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:21.811152 | orchestrator | 2026-02-19 06:18:21.811163 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 06:18:21.811175 | orchestrator | Thursday 19 February 2026 06:18:13 +0000 (0:00:01.119) 0:34:59.332 ***** 2026-02-19 06:18:21.811186 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-19 06:18:21.811198 | 
orchestrator | 2026-02-19 06:18:21.811210 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-19 06:18:21.811222 | orchestrator | Thursday 19 February 2026 06:18:14 +0000 (0:00:01.101) 0:35:00.434 ***** 2026-02-19 06:18:21.811234 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:21.811245 | orchestrator | 2026-02-19 06:18:21.811256 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-19 06:18:21.811268 | orchestrator | Thursday 19 February 2026 06:18:15 +0000 (0:00:01.153) 0:35:01.587 ***** 2026-02-19 06:18:21.811279 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:21.811289 | orchestrator | 2026-02-19 06:18:21.811301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-19 06:18:21.811312 | orchestrator | Thursday 19 February 2026 06:18:16 +0000 (0:00:01.113) 0:35:02.701 ***** 2026-02-19 06:18:21.811348 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:21.811360 | orchestrator | 2026-02-19 06:18:21.811371 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-19 06:18:21.811382 | orchestrator | Thursday 19 February 2026 06:18:17 +0000 (0:00:01.175) 0:35:03.877 ***** 2026-02-19 06:18:21.811393 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:18:21.811404 | orchestrator | 2026-02-19 06:18:21.811415 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-19 06:18:21.811425 | orchestrator | Thursday 19 February 2026 06:18:19 +0000 (0:00:01.361) 0:35:05.238 ***** 2026-02-19 06:18:21.811436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 06:18:21.811448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 06:18:21.811459 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-19 06:18:21.811470 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:21.811481 | orchestrator | 2026-02-19 06:18:21.811492 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-19 06:18:21.811504 | orchestrator | Thursday 19 February 2026 06:18:20 +0000 (0:00:01.417) 0:35:06.656 ***** 2026-02-19 06:18:21.811565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 06:18:21.811577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 06:18:21.811588 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 06:18:21.811599 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:18:21.811610 | orchestrator | 2026-02-19 06:18:21.811630 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 06:19:08.866070 | orchestrator | Thursday 19 February 2026 06:18:21 +0000 (0:00:01.364) 0:35:08.021 ***** 2026-02-19 06:19:08.866188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 06:19:08.866231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 06:19:08.866244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 06:19:08.866255 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.866267 | orchestrator | 2026-02-19 06:19:08.866279 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 06:19:08.866291 | orchestrator | Thursday 19 February 2026 06:18:23 +0000 (0:00:01.406) 0:35:09.427 ***** 2026-02-19 06:19:08.866302 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.866314 | orchestrator | 2026-02-19 06:19:08.866405 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-19 06:19:08.866419 | orchestrator | Thursday 19 February 2026 06:18:24 +0000 
(0:00:01.162) 0:35:10.589 ***** 2026-02-19 06:19:08.866430 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-19 06:19:08.866441 | orchestrator | 2026-02-19 06:19:08.866452 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-19 06:19:08.866463 | orchestrator | Thursday 19 February 2026 06:18:25 +0000 (0:00:01.332) 0:35:11.922 ***** 2026-02-19 06:19:08.866482 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:19:08.866501 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:19:08.866518 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:19:08.866537 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-19 06:19:08.866576 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 06:19:08.866596 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 06:19:08.866616 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 06:19:08.866629 | orchestrator | 2026-02-19 06:19:08.866642 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-19 06:19:08.866654 | orchestrator | Thursday 19 February 2026 06:18:27 +0000 (0:00:02.033) 0:35:13.956 ***** 2026-02-19 06:19:08.866667 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:19:08.866680 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:19:08.866692 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:19:08.866705 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-19 06:19:08.866718 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 06:19:08.866730 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 06:19:08.866742 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 06:19:08.866754 | orchestrator | 2026-02-19 06:19:08.866767 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-19 06:19:08.866779 | orchestrator | Thursday 19 February 2026 06:18:30 +0000 (0:00:02.490) 0:35:16.446 ***** 2026-02-19 06:19:08.866791 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.866804 | orchestrator | 2026-02-19 06:19:08.866816 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-19 06:19:08.866828 | orchestrator | Thursday 19 February 2026 06:18:31 +0000 (0:00:01.484) 0:35:17.931 ***** 2026-02-19 06:19:08.866841 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.866854 | orchestrator | 2026-02-19 06:19:08.866866 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-19 06:19:08.866879 | orchestrator | Thursday 19 February 2026 06:18:32 +0000 (0:00:01.122) 0:35:19.054 ***** 2026-02-19 06:19:08.866891 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.866904 | orchestrator | 2026-02-19 06:19:08.866916 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-19 06:19:08.866939 | orchestrator | Thursday 19 February 2026 06:18:34 +0000 (0:00:01.611) 0:35:20.665 ***** 2026-02-19 06:19:08.866950 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-19 06:19:08.866961 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-19 06:19:08.866972 | orchestrator | 2026-02-19 06:19:08.866990 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-02-19 06:19:08.867006 | orchestrator | Thursday 19 February 2026 06:18:38 +0000 (0:00:04.228) 0:35:24.894 ***** 2026-02-19 06:19:08.867032 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-02-19 06:19:08.867053 | orchestrator | 2026-02-19 06:19:08.867071 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-19 06:19:08.867089 | orchestrator | Thursday 19 February 2026 06:18:39 +0000 (0:00:01.096) 0:35:25.990 ***** 2026-02-19 06:19:08.867106 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-02-19 06:19:08.867123 | orchestrator | 2026-02-19 06:19:08.867140 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-19 06:19:08.867158 | orchestrator | Thursday 19 February 2026 06:18:40 +0000 (0:00:01.155) 0:35:27.145 ***** 2026-02-19 06:19:08.867176 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.867194 | orchestrator | 2026-02-19 06:19:08.867213 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-19 06:19:08.867231 | orchestrator | Thursday 19 February 2026 06:18:42 +0000 (0:00:01.101) 0:35:28.247 ***** 2026-02-19 06:19:08.867250 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.867269 | orchestrator | 2026-02-19 06:19:08.867283 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-19 06:19:08.867314 | orchestrator | Thursday 19 February 2026 06:18:43 +0000 (0:00:01.501) 0:35:29.748 ***** 2026-02-19 06:19:08.867363 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.867374 | orchestrator | 2026-02-19 06:19:08.867385 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-19 06:19:08.867396 | orchestrator | Thursday 19 February 2026 
06:18:45 +0000 (0:00:01.530) 0:35:31.279 ***** 2026-02-19 06:19:08.867407 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.867418 | orchestrator | 2026-02-19 06:19:08.867429 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-19 06:19:08.867439 | orchestrator | Thursday 19 February 2026 06:18:46 +0000 (0:00:01.528) 0:35:32.808 ***** 2026-02-19 06:19:08.867450 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.867461 | orchestrator | 2026-02-19 06:19:08.867472 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-19 06:19:08.867483 | orchestrator | Thursday 19 February 2026 06:18:47 +0000 (0:00:01.129) 0:35:33.937 ***** 2026-02-19 06:19:08.867494 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.867504 | orchestrator | 2026-02-19 06:19:08.867516 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-19 06:19:08.867536 | orchestrator | Thursday 19 February 2026 06:18:48 +0000 (0:00:01.154) 0:35:35.092 ***** 2026-02-19 06:19:08.867552 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.867570 | orchestrator | 2026-02-19 06:19:08.867589 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-19 06:19:08.867609 | orchestrator | Thursday 19 February 2026 06:18:49 +0000 (0:00:01.107) 0:35:36.199 ***** 2026-02-19 06:19:08.867626 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.867646 | orchestrator | 2026-02-19 06:19:08.867657 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-19 06:19:08.867677 | orchestrator | Thursday 19 February 2026 06:18:51 +0000 (0:00:01.522) 0:35:37.721 ***** 2026-02-19 06:19:08.867688 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.867699 | orchestrator | 2026-02-19 06:19:08.867709 | orchestrator | TASK [ceph-handler : 
Include check_socket_non_container.yml] ******************* 2026-02-19 06:19:08.867720 | orchestrator | Thursday 19 February 2026 06:18:53 +0000 (0:00:01.545) 0:35:39.267 ***** 2026-02-19 06:19:08.867743 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.867754 | orchestrator | 2026-02-19 06:19:08.867765 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-19 06:19:08.867775 | orchestrator | Thursday 19 February 2026 06:18:54 +0000 (0:00:01.128) 0:35:40.395 ***** 2026-02-19 06:19:08.867786 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.867797 | orchestrator | 2026-02-19 06:19:08.867808 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-19 06:19:08.867818 | orchestrator | Thursday 19 February 2026 06:18:55 +0000 (0:00:01.092) 0:35:41.488 ***** 2026-02-19 06:19:08.867829 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.867840 | orchestrator | 2026-02-19 06:19:08.867851 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-19 06:19:08.867861 | orchestrator | Thursday 19 February 2026 06:18:56 +0000 (0:00:01.187) 0:35:42.675 ***** 2026-02-19 06:19:08.867872 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.867882 | orchestrator | 2026-02-19 06:19:08.867893 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-19 06:19:08.867904 | orchestrator | Thursday 19 February 2026 06:18:57 +0000 (0:00:01.132) 0:35:43.808 ***** 2026-02-19 06:19:08.867914 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.867925 | orchestrator | 2026-02-19 06:19:08.867936 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-19 06:19:08.867947 | orchestrator | Thursday 19 February 2026 06:18:58 +0000 (0:00:01.131) 0:35:44.940 ***** 2026-02-19 06:19:08.867957 | orchestrator | skipping: 
[testbed-node-3] 2026-02-19 06:19:08.867968 | orchestrator | 2026-02-19 06:19:08.867979 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-19 06:19:08.867990 | orchestrator | Thursday 19 February 2026 06:18:59 +0000 (0:00:01.117) 0:35:46.057 ***** 2026-02-19 06:19:08.868000 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.868011 | orchestrator | 2026-02-19 06:19:08.868022 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-19 06:19:08.868033 | orchestrator | Thursday 19 February 2026 06:19:00 +0000 (0:00:01.109) 0:35:47.167 ***** 2026-02-19 06:19:08.868044 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.868055 | orchestrator | 2026-02-19 06:19:08.868065 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-19 06:19:08.868076 | orchestrator | Thursday 19 February 2026 06:19:02 +0000 (0:00:01.110) 0:35:48.277 ***** 2026-02-19 06:19:08.868087 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.868097 | orchestrator | 2026-02-19 06:19:08.868108 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-19 06:19:08.868119 | orchestrator | Thursday 19 February 2026 06:19:03 +0000 (0:00:01.126) 0:35:49.404 ***** 2026-02-19 06:19:08.868129 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:08.868140 | orchestrator | 2026-02-19 06:19:08.868151 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-19 06:19:08.868161 | orchestrator | Thursday 19 February 2026 06:19:04 +0000 (0:00:01.146) 0:35:50.550 ***** 2026-02-19 06:19:08.868172 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.868183 | orchestrator | 2026-02-19 06:19:08.868193 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-19 06:19:08.868204 | 
orchestrator | Thursday 19 February 2026 06:19:05 +0000 (0:00:01.162) 0:35:51.713 ***** 2026-02-19 06:19:08.868215 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.868226 | orchestrator | 2026-02-19 06:19:08.868237 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-19 06:19:08.868247 | orchestrator | Thursday 19 February 2026 06:19:06 +0000 (0:00:01.134) 0:35:52.847 ***** 2026-02-19 06:19:08.868258 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.868269 | orchestrator | 2026-02-19 06:19:08.868279 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-19 06:19:08.868290 | orchestrator | Thursday 19 February 2026 06:19:07 +0000 (0:00:01.135) 0:35:53.983 ***** 2026-02-19 06:19:08.868307 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:08.868318 | orchestrator | 2026-02-19 06:19:08.868360 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-19 06:19:56.594710 | orchestrator | Thursday 19 February 2026 06:19:08 +0000 (0:00:01.096) 0:35:55.079 ***** 2026-02-19 06:19:56.594833 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.594849 | orchestrator | 2026-02-19 06:19:56.594861 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-19 06:19:56.594871 | orchestrator | Thursday 19 February 2026 06:19:09 +0000 (0:00:01.086) 0:35:56.166 ***** 2026-02-19 06:19:56.594889 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.594905 | orchestrator | 2026-02-19 06:19:56.594922 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-19 06:19:56.594939 | orchestrator | Thursday 19 February 2026 06:19:11 +0000 (0:00:01.074) 0:35:57.241 ***** 2026-02-19 06:19:56.594955 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.594972 | orchestrator | 2026-02-19 
06:19:56.594989 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-19 06:19:56.595007 | orchestrator | Thursday 19 February 2026 06:19:12 +0000 (0:00:01.055) 0:35:58.297 ***** 2026-02-19 06:19:56.595023 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.595041 | orchestrator | 2026-02-19 06:19:56.595057 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-19 06:19:56.595073 | orchestrator | Thursday 19 February 2026 06:19:13 +0000 (0:00:01.147) 0:35:59.444 ***** 2026-02-19 06:19:56.595091 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.595109 | orchestrator | 2026-02-19 06:19:56.595126 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-19 06:19:56.595143 | orchestrator | Thursday 19 February 2026 06:19:14 +0000 (0:00:01.111) 0:36:00.556 ***** 2026-02-19 06:19:56.595203 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.595222 | orchestrator | 2026-02-19 06:19:56.595242 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-19 06:19:56.595260 | orchestrator | Thursday 19 February 2026 06:19:15 +0000 (0:00:01.127) 0:36:01.683 ***** 2026-02-19 06:19:56.595278 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.595296 | orchestrator | 2026-02-19 06:19:56.595313 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-19 06:19:56.595329 | orchestrator | Thursday 19 February 2026 06:19:16 +0000 (0:00:01.121) 0:36:02.805 ***** 2026-02-19 06:19:56.595347 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.595363 | orchestrator | 2026-02-19 06:19:56.595380 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-19 06:19:56.595513 | orchestrator | Thursday 19 February 2026 06:19:17 +0000 
(0:00:01.166) 0:36:03.971 ***** 2026-02-19 06:19:56.595537 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:56.595548 | orchestrator | 2026-02-19 06:19:56.595558 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-19 06:19:56.595568 | orchestrator | Thursday 19 February 2026 06:19:19 +0000 (0:00:01.912) 0:36:05.883 ***** 2026-02-19 06:19:56.595578 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:56.595588 | orchestrator | 2026-02-19 06:19:56.595597 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-19 06:19:56.595607 | orchestrator | Thursday 19 February 2026 06:19:21 +0000 (0:00:02.233) 0:36:08.116 ***** 2026-02-19 06:19:56.595617 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-02-19 06:19:56.595628 | orchestrator | 2026-02-19 06:19:56.595637 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-19 06:19:56.595647 | orchestrator | Thursday 19 February 2026 06:19:22 +0000 (0:00:01.104) 0:36:09.221 ***** 2026-02-19 06:19:56.595657 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.595667 | orchestrator | 2026-02-19 06:19:56.595676 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-19 06:19:56.595686 | orchestrator | Thursday 19 February 2026 06:19:24 +0000 (0:00:01.091) 0:36:10.313 ***** 2026-02-19 06:19:56.595716 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.595727 | orchestrator | 2026-02-19 06:19:56.595736 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-19 06:19:56.595746 | orchestrator | Thursday 19 February 2026 06:19:25 +0000 (0:00:01.129) 0:36:11.442 ***** 2026-02-19 06:19:56.595755 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-19 
06:19:56.595765 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-19 06:19:56.595774 | orchestrator | 2026-02-19 06:19:56.595785 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-19 06:19:56.595795 | orchestrator | Thursday 19 February 2026 06:19:27 +0000 (0:00:01.888) 0:36:13.330 ***** 2026-02-19 06:19:56.595804 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:56.595814 | orchestrator | 2026-02-19 06:19:56.595823 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-19 06:19:56.595833 | orchestrator | Thursday 19 February 2026 06:19:28 +0000 (0:00:01.451) 0:36:14.782 ***** 2026-02-19 06:19:56.595847 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.595863 | orchestrator | 2026-02-19 06:19:56.595879 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-19 06:19:56.595895 | orchestrator | Thursday 19 February 2026 06:19:29 +0000 (0:00:01.138) 0:36:15.920 ***** 2026-02-19 06:19:56.595911 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.595928 | orchestrator | 2026-02-19 06:19:56.595943 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-19 06:19:56.595961 | orchestrator | Thursday 19 February 2026 06:19:30 +0000 (0:00:01.133) 0:36:17.054 ***** 2026-02-19 06:19:56.595978 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.595993 | orchestrator | 2026-02-19 06:19:56.596008 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-19 06:19:56.596018 | orchestrator | Thursday 19 February 2026 06:19:31 +0000 (0:00:01.144) 0:36:18.199 ***** 2026-02-19 06:19:56.596027 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-02-19 06:19:56.596037 | orchestrator | 
2026-02-19 06:19:56.596046 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-19 06:19:56.596076 | orchestrator | Thursday 19 February 2026 06:19:33 +0000 (0:00:01.108) 0:36:19.308 ***** 2026-02-19 06:19:56.596086 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:56.596096 | orchestrator | 2026-02-19 06:19:56.596105 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-19 06:19:56.596115 | orchestrator | Thursday 19 February 2026 06:19:34 +0000 (0:00:01.666) 0:36:20.975 ***** 2026-02-19 06:19:56.596125 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 06:19:56.596134 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 06:19:56.596144 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 06:19:56.596182 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596192 | orchestrator | 2026-02-19 06:19:56.596202 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-19 06:19:56.596211 | orchestrator | Thursday 19 February 2026 06:19:35 +0000 (0:00:01.136) 0:36:22.111 ***** 2026-02-19 06:19:56.596221 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596230 | orchestrator | 2026-02-19 06:19:56.596240 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-19 06:19:56.596249 | orchestrator | Thursday 19 February 2026 06:19:37 +0000 (0:00:01.132) 0:36:23.244 ***** 2026-02-19 06:19:56.596259 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596268 | orchestrator | 2026-02-19 06:19:56.596278 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-19 06:19:56.596287 | orchestrator | Thursday 19 February 2026 06:19:38 +0000 
(0:00:01.151) 0:36:24.395 ***** 2026-02-19 06:19:56.596307 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596317 | orchestrator | 2026-02-19 06:19:56.596333 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-19 06:19:56.596343 | orchestrator | Thursday 19 February 2026 06:19:39 +0000 (0:00:01.122) 0:36:25.518 ***** 2026-02-19 06:19:56.596353 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596362 | orchestrator | 2026-02-19 06:19:56.596372 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-19 06:19:56.596381 | orchestrator | Thursday 19 February 2026 06:19:40 +0000 (0:00:01.113) 0:36:26.631 ***** 2026-02-19 06:19:56.596391 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596400 | orchestrator | 2026-02-19 06:19:56.596410 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-19 06:19:56.596419 | orchestrator | Thursday 19 February 2026 06:19:41 +0000 (0:00:01.113) 0:36:27.745 ***** 2026-02-19 06:19:56.596429 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:56.596438 | orchestrator | 2026-02-19 06:19:56.596448 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-19 06:19:56.596457 | orchestrator | Thursday 19 February 2026 06:19:43 +0000 (0:00:02.446) 0:36:30.191 ***** 2026-02-19 06:19:56.596467 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:56.596476 | orchestrator | 2026-02-19 06:19:56.596486 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-19 06:19:56.596495 | orchestrator | Thursday 19 February 2026 06:19:45 +0000 (0:00:01.110) 0:36:31.302 ***** 2026-02-19 06:19:56.596505 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-02-19 06:19:56.596514 | orchestrator | 2026-02-19 
06:19:56.596524 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-19 06:19:56.596534 | orchestrator | Thursday 19 February 2026 06:19:46 +0000 (0:00:01.153) 0:36:32.456 ***** 2026-02-19 06:19:56.596543 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596552 | orchestrator | 2026-02-19 06:19:56.596562 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-19 06:19:56.596571 | orchestrator | Thursday 19 February 2026 06:19:47 +0000 (0:00:01.134) 0:36:33.590 ***** 2026-02-19 06:19:56.596581 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596590 | orchestrator | 2026-02-19 06:19:56.596600 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-19 06:19:56.596609 | orchestrator | Thursday 19 February 2026 06:19:48 +0000 (0:00:01.132) 0:36:34.722 ***** 2026-02-19 06:19:56.596619 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596629 | orchestrator | 2026-02-19 06:19:56.596638 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-19 06:19:56.596647 | orchestrator | Thursday 19 February 2026 06:19:49 +0000 (0:00:01.142) 0:36:35.864 ***** 2026-02-19 06:19:56.596657 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596666 | orchestrator | 2026-02-19 06:19:56.596676 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-19 06:19:56.596686 | orchestrator | Thursday 19 February 2026 06:19:50 +0000 (0:00:01.200) 0:36:37.065 ***** 2026-02-19 06:19:56.596695 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596704 | orchestrator | 2026-02-19 06:19:56.596714 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-19 06:19:56.596723 | orchestrator | Thursday 19 February 2026 06:19:51 +0000 (0:00:01.119) 
0:36:38.185 ***** 2026-02-19 06:19:56.596733 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596742 | orchestrator | 2026-02-19 06:19:56.596752 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-19 06:19:56.596761 | orchestrator | Thursday 19 February 2026 06:19:53 +0000 (0:00:01.116) 0:36:39.301 ***** 2026-02-19 06:19:56.596771 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596780 | orchestrator | 2026-02-19 06:19:56.596790 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-19 06:19:56.596807 | orchestrator | Thursday 19 February 2026 06:19:54 +0000 (0:00:01.186) 0:36:40.488 ***** 2026-02-19 06:19:56.596816 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:19:56.596826 | orchestrator | 2026-02-19 06:19:56.596835 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-19 06:19:56.596845 | orchestrator | Thursday 19 February 2026 06:19:55 +0000 (0:00:01.130) 0:36:41.619 ***** 2026-02-19 06:19:56.596854 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:19:56.596864 | orchestrator | 2026-02-19 06:19:56.596874 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-19 06:19:56.596896 | orchestrator | Thursday 19 February 2026 06:19:56 +0000 (0:00:01.186) 0:36:42.805 ***** 2026-02-19 06:20:46.770601 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-02-19 06:20:46.770718 | orchestrator | 2026-02-19 06:20:46.770736 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-19 06:20:46.770748 | orchestrator | Thursday 19 February 2026 06:19:57 +0000 (0:00:01.110) 0:36:43.916 ***** 2026-02-19 06:20:46.770759 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-19 06:20:46.770771 | orchestrator | ok: 
[testbed-node-3] => (item=/var/lib/ceph/) 2026-02-19 06:20:46.770782 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-19 06:20:46.770801 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-19 06:20:46.770820 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-19 06:20:46.770838 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-19 06:20:46.770857 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-19 06:20:46.770876 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-19 06:20:46.770895 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-19 06:20:46.770915 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-19 06:20:46.770934 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-19 06:20:46.770954 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-19 06:20:46.770973 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-19 06:20:46.771069 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-19 06:20:46.771083 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-19 06:20:46.771094 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-19 06:20:46.771105 | orchestrator | 2026-02-19 06:20:46.771117 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-19 06:20:46.771128 | orchestrator | Thursday 19 February 2026 06:20:04 +0000 (0:00:06.712) 0:36:50.628 ***** 2026-02-19 06:20:46.771139 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-02-19 06:20:46.771153 | orchestrator | 2026-02-19 06:20:46.771165 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] 
***************** 2026-02-19 06:20:46.771189 | orchestrator | Thursday 19 February 2026 06:20:05 +0000 (0:00:01.514) 0:36:52.143 ***** 2026-02-19 06:20:46.771202 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-19 06:20:46.771216 | orchestrator | 2026-02-19 06:20:46.771228 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-19 06:20:46.771240 | orchestrator | Thursday 19 February 2026 06:20:07 +0000 (0:00:01.527) 0:36:53.670 ***** 2026-02-19 06:20:46.771252 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-19 06:20:46.771264 | orchestrator | 2026-02-19 06:20:46.771276 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-19 06:20:46.771288 | orchestrator | Thursday 19 February 2026 06:20:09 +0000 (0:00:01.993) 0:36:55.664 ***** 2026-02-19 06:20:46.771301 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.771339 | orchestrator | 2026-02-19 06:20:46.771352 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-19 06:20:46.771365 | orchestrator | Thursday 19 February 2026 06:20:10 +0000 (0:00:01.144) 0:36:56.808 ***** 2026-02-19 06:20:46.771377 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.771390 | orchestrator | 2026-02-19 06:20:46.771402 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-19 06:20:46.771414 | orchestrator | Thursday 19 February 2026 06:20:11 +0000 (0:00:01.145) 0:36:57.954 ***** 2026-02-19 06:20:46.771426 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.771438 | orchestrator | 2026-02-19 06:20:46.771450 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 
2026-02-19 06:20:46.771463 | orchestrator | Thursday 19 February 2026 06:20:12 +0000 (0:00:01.149) 0:36:59.103 ***** 2026-02-19 06:20:46.771475 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.771487 | orchestrator | 2026-02-19 06:20:46.771500 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-19 06:20:46.771511 | orchestrator | Thursday 19 February 2026 06:20:14 +0000 (0:00:01.134) 0:37:00.238 ***** 2026-02-19 06:20:46.771521 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.771532 | orchestrator | 2026-02-19 06:20:46.771544 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-19 06:20:46.771564 | orchestrator | Thursday 19 February 2026 06:20:15 +0000 (0:00:01.110) 0:37:01.349 ***** 2026-02-19 06:20:46.771584 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.771605 | orchestrator | 2026-02-19 06:20:46.771625 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-19 06:20:46.771637 | orchestrator | Thursday 19 February 2026 06:20:16 +0000 (0:00:01.107) 0:37:02.456 ***** 2026-02-19 06:20:46.771648 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.771659 | orchestrator | 2026-02-19 06:20:46.771669 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-19 06:20:46.771680 | orchestrator | Thursday 19 February 2026 06:20:17 +0000 (0:00:01.109) 0:37:03.566 ***** 2026-02-19 06:20:46.771691 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.771702 | orchestrator | 2026-02-19 06:20:46.771712 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-19 06:20:46.771723 | orchestrator | Thursday 19 February 2026 06:20:18 +0000 (0:00:01.146) 0:37:04.712 ***** 
2026-02-19 06:20:46.771735 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.771745 | orchestrator | 2026-02-19 06:20:46.771775 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-19 06:20:46.771787 | orchestrator | Thursday 19 February 2026 06:20:19 +0000 (0:00:01.101) 0:37:05.814 ***** 2026-02-19 06:20:46.771798 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.771808 | orchestrator | 2026-02-19 06:20:46.771819 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-19 06:20:46.771829 | orchestrator | Thursday 19 February 2026 06:20:20 +0000 (0:00:01.104) 0:37:06.918 ***** 2026-02-19 06:20:46.771840 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:20:46.771851 | orchestrator | 2026-02-19 06:20:46.771862 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-19 06:20:46.771872 | orchestrator | Thursday 19 February 2026 06:20:21 +0000 (0:00:01.171) 0:37:08.090 ***** 2026-02-19 06:20:46.771883 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-19 06:20:46.771893 | orchestrator | 2026-02-19 06:20:46.771904 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-19 06:20:46.771915 | orchestrator | Thursday 19 February 2026 06:20:26 +0000 (0:00:04.556) 0:37:12.646 ***** 2026-02-19 06:20:46.771933 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-19 06:20:46.771951 | orchestrator | 2026-02-19 06:20:46.772045 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-19 06:20:46.772069 | orchestrator | Thursday 19 February 2026 06:20:27 +0000 (0:00:01.150) 0:37:13.797 ***** 2026-02-19 06:20:46.772099 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-19 06:20:46.772115 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-19 06:20:46.772127 | orchestrator | 2026-02-19 06:20:46.772138 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-19 06:20:46.772149 | orchestrator | Thursday 19 February 2026 06:20:35 +0000 (0:00:07.970) 0:37:21.768 ***** 2026-02-19 06:20:46.772160 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.772170 | orchestrator | 2026-02-19 06:20:46.772181 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-19 06:20:46.772192 | orchestrator | Thursday 19 February 2026 06:20:36 +0000 (0:00:01.176) 0:37:22.945 ***** 2026-02-19 06:20:46.772202 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.772213 | orchestrator | 2026-02-19 06:20:46.772224 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-19 06:20:46.772235 | orchestrator | Thursday 19 February 2026 06:20:37 +0000 (0:00:01.109) 0:37:24.054 ***** 2026-02-19 06:20:46.772245 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.772256 | orchestrator | 2026-02-19 06:20:46.772267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-19 
06:20:46.772277 | orchestrator | Thursday 19 February 2026 06:20:38 +0000 (0:00:01.161) 0:37:25.216 ***** 2026-02-19 06:20:46.772288 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.772298 | orchestrator | 2026-02-19 06:20:46.772309 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-19 06:20:46.772320 | orchestrator | Thursday 19 February 2026 06:20:40 +0000 (0:00:01.141) 0:37:26.358 ***** 2026-02-19 06:20:46.772330 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.772341 | orchestrator | 2026-02-19 06:20:46.772352 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-19 06:20:46.772363 | orchestrator | Thursday 19 February 2026 06:20:41 +0000 (0:00:01.148) 0:37:27.507 ***** 2026-02-19 06:20:46.772373 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:20:46.772384 | orchestrator | 2026-02-19 06:20:46.772395 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-19 06:20:46.772405 | orchestrator | Thursday 19 February 2026 06:20:42 +0000 (0:00:01.235) 0:37:28.742 ***** 2026-02-19 06:20:46.772416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 06:20:46.772427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 06:20:46.772438 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 06:20:46.772448 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.772459 | orchestrator | 2026-02-19 06:20:46.772470 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-19 06:20:46.772487 | orchestrator | Thursday 19 February 2026 06:20:43 +0000 (0:00:01.389) 0:37:30.132 ***** 2026-02-19 06:20:46.772505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 06:20:46.772524 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-02-19 06:20:46.772542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 06:20:46.772560 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:20:46.772581 | orchestrator | 2026-02-19 06:20:46.772592 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 06:20:46.772603 | orchestrator | Thursday 19 February 2026 06:20:45 +0000 (0:00:01.415) 0:37:31.548 ***** 2026-02-19 06:20:46.772614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 06:20:46.772625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 06:20:46.772645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 06:21:46.315255 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:21:46.315364 | orchestrator | 2026-02-19 06:21:46.315378 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 06:21:46.315389 | orchestrator | Thursday 19 February 2026 06:20:46 +0000 (0:00:01.433) 0:37:32.981 ***** 2026-02-19 06:21:46.315398 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:21:46.315408 | orchestrator | 2026-02-19 06:21:46.315420 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-19 06:21:46.315436 | orchestrator | Thursday 19 February 2026 06:20:47 +0000 (0:00:01.164) 0:37:34.146 ***** 2026-02-19 06:21:46.315451 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-19 06:21:46.315466 | orchestrator | 2026-02-19 06:21:46.315481 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-19 06:21:46.315497 | orchestrator | Thursday 19 February 2026 06:20:49 +0000 (0:00:01.772) 0:37:35.918 ***** 2026-02-19 06:21:46.315512 | orchestrator | changed: [testbed-node-3] 2026-02-19 06:21:46.315526 | orchestrator | 2026-02-19 06:21:46.315539 | orchestrator | 
TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-19 06:21:46.315552 | orchestrator | Thursday 19 February 2026 06:20:51 +0000 (0:00:01.724) 0:37:37.643 ***** 2026-02-19 06:21:46.315567 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:21:46.315582 | orchestrator | 2026-02-19 06:21:46.315600 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-19 06:21:46.315610 | orchestrator | Thursday 19 February 2026 06:20:52 +0000 (0:00:01.104) 0:37:38.747 ***** 2026-02-19 06:21:46.315633 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:21:46.315644 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:21:46.315652 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:21:46.315661 | orchestrator | 2026-02-19 06:21:46.315670 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-19 06:21:46.315679 | orchestrator | Thursday 19 February 2026 06:20:54 +0000 (0:00:01.628) 0:37:40.376 ***** 2026-02-19 06:21:46.315688 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-02-19 06:21:46.315696 | orchestrator | 2026-02-19 06:21:46.315705 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-19 06:21:46.315714 | orchestrator | Thursday 19 February 2026 06:20:55 +0000 (0:00:01.448) 0:37:41.825 ***** 2026-02-19 06:21:46.315722 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:21:46.315731 | orchestrator | 2026-02-19 06:21:46.315740 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-19 06:21:46.315748 | orchestrator | Thursday 19 February 2026 06:20:56 +0000 (0:00:01.119) 0:37:42.945 ***** 2026-02-19 06:21:46.315757 | 
orchestrator | skipping: [testbed-node-3] 2026-02-19 06:21:46.315766 | orchestrator | 2026-02-19 06:21:46.315774 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-19 06:21:46.315783 | orchestrator | Thursday 19 February 2026 06:20:57 +0000 (0:00:01.110) 0:37:44.055 ***** 2026-02-19 06:21:46.315792 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:21:46.315800 | orchestrator | 2026-02-19 06:21:46.315837 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-19 06:21:46.315853 | orchestrator | Thursday 19 February 2026 06:20:59 +0000 (0:00:01.449) 0:37:45.505 ***** 2026-02-19 06:21:46.315869 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:21:46.315904 | orchestrator | 2026-02-19 06:21:46.315913 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-19 06:21:46.315922 | orchestrator | Thursday 19 February 2026 06:21:00 +0000 (0:00:01.119) 0:37:46.624 ***** 2026-02-19 06:21:46.315930 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-19 06:21:46.315941 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-19 06:21:46.315949 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-19 06:21:46.315958 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-19 06:21:46.315966 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-19 06:21:46.315975 | orchestrator | 2026-02-19 06:21:46.315984 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-19 06:21:46.315994 | orchestrator | Thursday 19 February 2026 06:21:03 +0000 (0:00:03.094) 0:37:49.719 ***** 2026-02-19 06:21:46.316004 | orchestrator | skipping: [testbed-node-3] 
2026-02-19 06:21:46.316015 | orchestrator | 2026-02-19 06:21:46.316026 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-19 06:21:46.316036 | orchestrator | Thursday 19 February 2026 06:21:04 +0000 (0:00:01.122) 0:37:50.842 ***** 2026-02-19 06:21:46.316047 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-02-19 06:21:46.316058 | orchestrator | 2026-02-19 06:21:46.316068 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-19 06:21:46.316079 | orchestrator | Thursday 19 February 2026 06:21:06 +0000 (0:00:01.566) 0:37:52.409 ***** 2026-02-19 06:21:46.316090 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-19 06:21:46.316101 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-19 06:21:46.316111 | orchestrator | 2026-02-19 06:21:46.316122 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-19 06:21:46.316133 | orchestrator | Thursday 19 February 2026 06:21:08 +0000 (0:00:01.817) 0:37:54.227 ***** 2026-02-19 06:21:46.316143 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 06:21:46.316154 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-19 06:21:46.316165 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-19 06:21:46.316176 | orchestrator | 2026-02-19 06:21:46.316207 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-19 06:21:46.316218 | orchestrator | Thursday 19 February 2026 06:21:11 +0000 (0:00:03.288) 0:37:57.515 ***** 2026-02-19 06:21:46.316229 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-19 06:21:46.316240 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-19 06:21:46.316251 | orchestrator | ok: [testbed-node-3] 
2026-02-19 06:21:46.316262 | orchestrator | 2026-02-19 06:21:46.316272 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-19 06:21:46.316283 | orchestrator | Thursday 19 February 2026 06:21:13 +0000 (0:00:01.985) 0:37:59.500 ***** 2026-02-19 06:21:46.316294 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:21:46.316304 | orchestrator | 2026-02-19 06:21:46.316315 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-19 06:21:46.316326 | orchestrator | Thursday 19 February 2026 06:21:14 +0000 (0:00:01.195) 0:38:00.698 ***** 2026-02-19 06:21:46.316336 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:21:46.316347 | orchestrator | 2026-02-19 06:21:46.316358 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-19 06:21:46.316369 | orchestrator | Thursday 19 February 2026 06:21:15 +0000 (0:00:01.186) 0:38:01.884 ***** 2026-02-19 06:21:46.316379 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:21:46.316390 | orchestrator | 2026-02-19 06:21:46.316400 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-19 06:21:46.316411 | orchestrator | Thursday 19 February 2026 06:21:16 +0000 (0:00:01.103) 0:38:02.988 ***** 2026-02-19 06:21:46.316447 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-02-19 06:21:46.316465 | orchestrator | 2026-02-19 06:21:46.316483 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-19 06:21:46.316502 | orchestrator | Thursday 19 February 2026 06:21:18 +0000 (0:00:01.454) 0:38:04.442 ***** 2026-02-19 06:21:46.316521 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:21:46.316540 | orchestrator | 2026-02-19 06:21:46.316552 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 
2026-02-19 06:21:46.316563 | orchestrator | Thursday 19 February 2026 06:21:19 +0000 (0:00:01.463) 0:38:05.905 ***** 2026-02-19 06:21:46.316573 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:21:46.316584 | orchestrator | 2026-02-19 06:21:46.316595 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-19 06:21:46.316606 | orchestrator | Thursday 19 February 2026 06:21:23 +0000 (0:00:03.696) 0:38:09.601 ***** 2026-02-19 06:21:46.316617 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-02-19 06:21:46.316628 | orchestrator | 2026-02-19 06:21:46.316638 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-19 06:21:46.316649 | orchestrator | Thursday 19 February 2026 06:21:24 +0000 (0:00:01.436) 0:38:11.038 ***** 2026-02-19 06:21:46.316660 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:21:46.316671 | orchestrator | 2026-02-19 06:21:46.316681 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-19 06:21:46.316692 | orchestrator | Thursday 19 February 2026 06:21:26 +0000 (0:00:01.978) 0:38:13.017 ***** 2026-02-19 06:21:46.316702 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:21:46.316713 | orchestrator | 2026-02-19 06:21:46.316724 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-19 06:21:46.316734 | orchestrator | Thursday 19 February 2026 06:21:28 +0000 (0:00:01.949) 0:38:14.966 ***** 2026-02-19 06:21:46.316745 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:21:46.316756 | orchestrator | 2026-02-19 06:21:46.316766 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-19 06:21:46.316777 | orchestrator | Thursday 19 February 2026 06:21:30 +0000 (0:00:02.183) 0:38:17.149 ***** 2026-02-19 06:21:46.316788 | orchestrator | skipping: [testbed-node-3] 
2026-02-19 06:21:46.316798 | orchestrator | 2026-02-19 06:21:46.316843 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-19 06:21:46.316855 | orchestrator | Thursday 19 February 2026 06:21:32 +0000 (0:00:01.118) 0:38:18.268 ***** 2026-02-19 06:21:46.316866 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:21:46.316877 | orchestrator | 2026-02-19 06:21:46.316888 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-19 06:21:46.316899 | orchestrator | Thursday 19 February 2026 06:21:33 +0000 (0:00:01.112) 0:38:19.381 ***** 2026-02-19 06:21:46.316909 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-19 06:21:46.316920 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-19 06:21:46.316931 | orchestrator | 2026-02-19 06:21:46.316942 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-19 06:21:46.316952 | orchestrator | Thursday 19 February 2026 06:21:35 +0000 (0:00:01.969) 0:38:21.350 ***** 2026-02-19 06:21:46.316963 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-19 06:21:46.316974 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-19 06:21:46.316984 | orchestrator | 2026-02-19 06:21:46.316995 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-19 06:21:46.317006 | orchestrator | Thursday 19 February 2026 06:21:37 +0000 (0:00:02.827) 0:38:24.178 ***** 2026-02-19 06:21:46.317016 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-19 06:21:46.317027 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-19 06:21:46.317038 | orchestrator | 2026-02-19 06:21:46.317049 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-19 06:21:46.317060 | orchestrator | Thursday 19 February 2026 06:21:42 +0000 (0:00:04.821) 0:38:28.999 ***** 2026-02-19 06:21:46.317079 | orchestrator 
| skipping: [testbed-node-3] 2026-02-19 06:21:46.317090 | orchestrator | 2026-02-19 06:21:46.317100 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-19 06:21:46.317111 | orchestrator | Thursday 19 February 2026 06:21:43 +0000 (0:00:01.166) 0:38:30.166 ***** 2026-02-19 06:21:46.317122 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:21:46.317132 | orchestrator | 2026-02-19 06:21:46.317143 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-19 06:21:46.317154 | orchestrator | Thursday 19 February 2026 06:21:45 +0000 (0:00:01.168) 0:38:31.335 ***** 2026-02-19 06:21:46.317165 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:21:46.317175 | orchestrator | 2026-02-19 06:21:46.317194 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-19 06:22:32.748293 | orchestrator | Thursday 19 February 2026 06:21:46 +0000 (0:00:01.190) 0:38:32.526 ***** 2026-02-19 06:22:32.748406 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:22:32.748424 | orchestrator | 2026-02-19 06:22:32.748438 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-19 06:22:32.748450 | orchestrator | Thursday 19 February 2026 06:21:47 +0000 (0:00:01.078) 0:38:33.604 ***** 2026-02-19 06:22:32.748461 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:22:32.748472 | orchestrator | 2026-02-19 06:22:32.748483 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-19 06:22:32.748494 | orchestrator | Thursday 19 February 2026 06:21:48 +0000 (0:00:01.035) 0:38:34.639 ***** 2026-02-19 06:22:32.748505 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-19 06:22:32.748517 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-19 06:22:32.748528 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-19 06:22:32.748539 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-02-19 06:22:32.748566 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:22:32.748578 | orchestrator | 2026-02-19 06:22:32.748589 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-19 06:22:32.748600 | orchestrator | Thursday 19 February 2026 06:22:02 +0000 (0:00:14.533) 0:38:49.172 ***** 2026-02-19 06:22:32.748611 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:22:32.748622 | orchestrator | 2026-02-19 06:22:32.748633 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-19 06:22:32.748643 | orchestrator | Thursday 19 February 2026 06:22:04 +0000 (0:00:01.147) 0:38:50.320 ***** 2026-02-19 06:22:32.748654 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:22:32.748665 | orchestrator | 2026-02-19 06:22:32.748676 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-19 06:22:32.748710 | orchestrator | Thursday 19 February 2026 06:22:05 +0000 (0:00:01.136) 0:38:51.457 ***** 2026-02-19 06:22:32.748731 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:22:32.748743 | orchestrator | 2026-02-19 06:22:32.748753 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-19 06:22:32.748764 | orchestrator | Thursday 19 February 2026 06:22:06 +0000 (0:00:01.167) 0:38:52.624 ***** 2026-02-19 06:22:32.748775 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:22:32.748786 | orchestrator 
| 2026-02-19 06:22:32.748797 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-19 06:22:32.748808 | orchestrator | Thursday 19 February 2026 06:22:07 +0000 (0:00:01.113) 0:38:53.737 ***** 2026-02-19 06:22:32.748819 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:22:32.748830 | orchestrator | 2026-02-19 06:22:32.748841 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-19 06:22:32.748852 | orchestrator | Thursday 19 February 2026 06:22:08 +0000 (0:00:01.124) 0:38:54.861 ***** 2026-02-19 06:22:32.748887 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:22:32.748899 | orchestrator | 2026-02-19 06:22:32.748909 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-19 06:22:32.748920 | orchestrator | Thursday 19 February 2026 06:22:09 +0000 (0:00:01.103) 0:38:55.965 ***** 2026-02-19 06:22:32.748931 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:22:32.748941 | orchestrator | 2026-02-19 06:22:32.748952 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-19 06:22:32.748963 | orchestrator | 2026-02-19 06:22:32.748973 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 06:22:32.748984 | orchestrator | Thursday 19 February 2026 06:22:10 +0000 (0:00:00.931) 0:38:56.896 ***** 2026-02-19 06:22:32.748995 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-19 06:22:32.749005 | orchestrator | 2026-02-19 06:22:32.749016 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-19 06:22:32.749027 | orchestrator | Thursday 19 February 2026 06:22:11 +0000 (0:00:01.110) 0:38:58.007 ***** 2026-02-19 06:22:32.749037 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:32.749049 | orchestrator | 
2026-02-19 06:22:32.749060 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-19 06:22:32.749071 | orchestrator | Thursday 19 February 2026 06:22:13 +0000 (0:00:01.453) 0:38:59.461 ***** 2026-02-19 06:22:32.749081 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:32.749092 | orchestrator | 2026-02-19 06:22:32.749103 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 06:22:32.749114 | orchestrator | Thursday 19 February 2026 06:22:14 +0000 (0:00:01.106) 0:39:00.567 ***** 2026-02-19 06:22:32.749124 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:32.749135 | orchestrator | 2026-02-19 06:22:32.749146 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 06:22:32.749157 | orchestrator | Thursday 19 February 2026 06:22:15 +0000 (0:00:01.536) 0:39:02.104 ***** 2026-02-19 06:22:32.749167 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:32.749178 | orchestrator | 2026-02-19 06:22:32.749189 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-19 06:22:32.749199 | orchestrator | Thursday 19 February 2026 06:22:17 +0000 (0:00:01.158) 0:39:03.263 ***** 2026-02-19 06:22:32.749210 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:32.749221 | orchestrator | 2026-02-19 06:22:32.749232 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-19 06:22:32.749242 | orchestrator | Thursday 19 February 2026 06:22:18 +0000 (0:00:01.186) 0:39:04.449 ***** 2026-02-19 06:22:32.749253 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:32.749264 | orchestrator | 2026-02-19 06:22:32.749274 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-19 06:22:32.749285 | orchestrator | Thursday 19 February 2026 06:22:19 +0000 (0:00:01.148) 0:39:05.598 
***** 2026-02-19 06:22:32.749313 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:22:32.749325 | orchestrator | 2026-02-19 06:22:32.749336 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-19 06:22:32.749346 | orchestrator | Thursday 19 February 2026 06:22:20 +0000 (0:00:01.128) 0:39:06.726 ***** 2026-02-19 06:22:32.749357 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:32.749368 | orchestrator | 2026-02-19 06:22:32.749379 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-19 06:22:32.749389 | orchestrator | Thursday 19 February 2026 06:22:21 +0000 (0:00:01.090) 0:39:07.817 ***** 2026-02-19 06:22:32.749400 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:22:32.749411 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:22:32.749422 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:22:32.749432 | orchestrator | 2026-02-19 06:22:32.749452 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-19 06:22:32.749463 | orchestrator | Thursday 19 February 2026 06:22:23 +0000 (0:00:01.972) 0:39:09.790 ***** 2026-02-19 06:22:32.749473 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:32.749484 | orchestrator | 2026-02-19 06:22:32.749495 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-19 06:22:32.749511 | orchestrator | Thursday 19 February 2026 06:22:24 +0000 (0:00:01.236) 0:39:11.026 ***** 2026-02-19 06:22:32.749523 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:22:32.749534 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:22:32.749544 | 
orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:22:32.749555 | orchestrator | 2026-02-19 06:22:32.749566 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-19 06:22:32.749577 | orchestrator | Thursday 19 February 2026 06:22:28 +0000 (0:00:03.202) 0:39:14.228 ***** 2026-02-19 06:22:32.749587 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-19 06:22:32.749598 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-19 06:22:32.749609 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-19 06:22:32.749620 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:22:32.749631 | orchestrator | 2026-02-19 06:22:32.749641 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-19 06:22:32.749652 | orchestrator | Thursday 19 February 2026 06:22:29 +0000 (0:00:01.717) 0:39:15.946 ***** 2026-02-19 06:22:32.749756 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-19 06:22:32.749773 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-19 06:22:32.749784 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-19 06:22:32.749795 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:22:32.749806 | orchestrator | 2026-02-19 
06:22:32.749817 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-19 06:22:32.749828 | orchestrator | Thursday 19 February 2026 06:22:31 +0000 (0:00:01.872) 0:39:17.818 ***** 2026-02-19 06:22:32.749841 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:32.749856 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:32.749867 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:32.749891 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:22:32.749902 | orchestrator | 2026-02-19 06:22:32.749921 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-19 06:22:51.220849 | orchestrator | Thursday 19 February 2026 06:22:32 +0000 (0:00:01.143) 0:39:18.962 ***** 2026-02-19 06:22:51.220957 | orchestrator | 
ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 06:22:25.359890', 'end': '2026-02-19 06:22:25.412251', 'delta': '0:00:00.052361', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-19 06:22:51.220984 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 06:22:26.246635', 'end': '2026-02-19 06:22:26.291308', 'delta': '0:00:00.044673', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-19 06:22:51.220992 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '8bdbabe346bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 06:22:26.780762', 'end': '2026-02-19 06:22:26.846625', 'delta': '0:00:00.065863', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8bdbabe346bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-19 06:22:51.221000 | orchestrator | 2026-02-19 06:22:51.221008 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-19 06:22:51.221015 | orchestrator | Thursday 19 February 2026 06:22:33 +0000 (0:00:01.221) 0:39:20.183 ***** 2026-02-19 06:22:51.221022 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:51.221029 | orchestrator | 2026-02-19 06:22:51.221037 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-19 06:22:51.221043 | orchestrator | Thursday 19 February 2026 06:22:35 +0000 (0:00:01.249) 0:39:21.433 ***** 2026-02-19 06:22:51.221050 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:22:51.221058 | orchestrator | 2026-02-19 06:22:51.221065 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-19 06:22:51.221071 | orchestrator | Thursday 19 February 2026 06:22:36 +0000 (0:00:01.230) 0:39:22.664 ***** 2026-02-19 06:22:51.221078 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:51.221085 | orchestrator | 2026-02-19 06:22:51.221092 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-19 06:22:51.221098 | orchestrator | Thursday 19 February 2026 06:22:37 +0000 (0:00:01.122) 0:39:23.787 ***** 2026-02-19 06:22:51.221105 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:22:51.221129 | orchestrator | 2026-02-19 06:22:51.221136 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 06:22:51.221143 | orchestrator | 
Thursday 19 February 2026 06:22:39 +0000 (0:00:02.047) 0:39:25.834 ***** 2026-02-19 06:22:51.221150 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:51.221156 | orchestrator | 2026-02-19 06:22:51.221163 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-19 06:22:51.221169 | orchestrator | Thursday 19 February 2026 06:22:40 +0000 (0:00:01.169) 0:39:27.003 ***** 2026-02-19 06:22:51.221176 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:22:51.221183 | orchestrator | 2026-02-19 06:22:51.221189 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-19 06:22:51.221196 | orchestrator | Thursday 19 February 2026 06:22:41 +0000 (0:00:01.103) 0:39:28.108 ***** 2026-02-19 06:22:51.221203 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:22:51.221209 | orchestrator | 2026-02-19 06:22:51.221216 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 06:22:51.221222 | orchestrator | Thursday 19 February 2026 06:22:43 +0000 (0:00:01.197) 0:39:29.305 ***** 2026-02-19 06:22:51.221229 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:22:51.221236 | orchestrator | 2026-02-19 06:22:51.221242 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-19 06:22:51.221262 | orchestrator | Thursday 19 February 2026 06:22:44 +0000 (0:00:01.092) 0:39:30.398 ***** 2026-02-19 06:22:51.221269 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:22:51.221276 | orchestrator | 2026-02-19 06:22:51.221283 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-19 06:22:51.221289 | orchestrator | Thursday 19 February 2026 06:22:45 +0000 (0:00:01.103) 0:39:31.502 ***** 2026-02-19 06:22:51.221296 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:51.221302 | orchestrator | 2026-02-19 06:22:51.221309 | 
orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-19 06:22:51.221315 | orchestrator | Thursday 19 February 2026 06:22:46 +0000 (0:00:01.160) 0:39:32.662 ***** 2026-02-19 06:22:51.221322 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:22:51.221329 | orchestrator | 2026-02-19 06:22:51.221335 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 06:22:51.221342 | orchestrator | Thursday 19 February 2026 06:22:47 +0000 (0:00:01.084) 0:39:33.747 ***** 2026-02-19 06:22:51.221349 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:51.221357 | orchestrator | 2026-02-19 06:22:51.221364 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 06:22:51.221372 | orchestrator | Thursday 19 February 2026 06:22:48 +0000 (0:00:01.171) 0:39:34.918 ***** 2026-02-19 06:22:51.221379 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:22:51.221386 | orchestrator | 2026-02-19 06:22:51.221397 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 06:22:51.221405 | orchestrator | Thursday 19 February 2026 06:22:49 +0000 (0:00:01.091) 0:39:36.009 ***** 2026-02-19 06:22:51.221413 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:22:51.221420 | orchestrator | 2026-02-19 06:22:51.221428 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 06:22:51.221435 | orchestrator | Thursday 19 February 2026 06:22:50 +0000 (0:00:01.196) 0:39:37.206 ***** 2026-02-19 06:22:51.221444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:22:51.221455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160', 'dm-uuid-LVM-rZldl4LmlLXg6d7bs7fyJX4wA6bTnXoE36sCfZeCCq67ndja1fQrkP9qxd3UF2mf'], 'uuids': ['a59715b7-019c-4dda-9336-d3b7804a06c1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf']}})  2026-02-19 06:22:51.221470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58', 'scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85ad02dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 06:22:51.221480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hPAd08-UuBL-3Ygg-jY8a-jEiG-hu1p-INZmAJ', 'scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262', 'scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02']}})  2026-02-19 06:22:51.221493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:22:52.567107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:22:52.567230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-20-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 06:22:52.567252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:22:52.567265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww', 'dm-uuid-CRYPT-LUKS2-f68538a13fa347dc9b85a13ec62262c1-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:22:52.567301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:22:52.567314 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02', 'dm-uuid-LVM-av3z15qCzrck2TCuh26quy9SxGc4Uj0HHGk96w6thbK5NXQZcgefX0YYJ6eJW1Ww'], 'uuids': ['f68538a1-3fa3-47dc-9b85-a13ec62262c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww']}})  2026-02-19 06:22:52.567327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6C6XL0-fLb8-YfTA-cysM-yAaf-4LBE-w1N2gW', 'scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0', 'scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160']}})  2026-02-19 06:22:52.567357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:22:52.567428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '28e9d7a7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 06:22:52.567454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:22:52.567466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:22:52.567478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf', 'dm-uuid-CRYPT-LUKS2-a59715b7019c4dda9336d3b7804a06c1-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:22:52.567490 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:22:52.567504 | orchestrator | 2026-02-19 06:22:52.567518 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 06:22:52.567531 | orchestrator | Thursday 19 February 2026 06:22:52 +0000 (0:00:01.347) 0:39:38.553 ***** 2026-02-19 06:22:52.567553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:53.757604 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160', 'dm-uuid-LVM-rZldl4LmlLXg6d7bs7fyJX4wA6bTnXoE36sCfZeCCq67ndja1fQrkP9qxd3UF2mf'], 'uuids': ['a59715b7-019c-4dda-9336-d3b7804a06c1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:53.757773 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58', 'scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85ad02dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:53.757794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hPAd08-UuBL-3Ygg-jY8a-jEiG-hu1p-INZmAJ', 'scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262', 'scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:53.757806 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:53.757813 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:53.757842 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-20-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:53.757855 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:53.757861 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww', 'dm-uuid-CRYPT-LUKS2-f68538a13fa347dc9b85a13ec62262c1-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:53.757867 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:53.757873 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02', 'dm-uuid-LVM-av3z15qCzrck2TCuh26quy9SxGc4Uj0HHGk96w6thbK5NXQZcgefX0YYJ6eJW1Ww'], 'uuids': ['f68538a1-3fa3-47dc-9b85-a13ec62262c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:22:53.757888 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6C6XL0-fLb8-YfTA-cysM-yAaf-4LBE-w1N2gW', 'scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0', 'scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:23:11.718013 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:23:11.718185 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '28e9d7a7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:23:11.718203 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:23:11.718246 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:23:11.718280 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf', 'dm-uuid-CRYPT-LUKS2-a59715b7019c4dda9336d3b7804a06c1-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:23:11.718292 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:23:11.718305 | orchestrator | 2026-02-19 06:23:11.718317 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 06:23:11.718328 | orchestrator | Thursday 19 February 2026 06:22:53 +0000 (0:00:01.423) 0:39:39.976 ***** 2026-02-19 06:23:11.718338 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:23:11.718348 | orchestrator | 2026-02-19 06:23:11.718358 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 06:23:11.718367 | orchestrator | Thursday 19 February 2026 06:22:55 +0000 (0:00:01.505) 0:39:41.482 ***** 2026-02-19 06:23:11.718377 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:23:11.718386 | orchestrator | 2026-02-19 06:23:11.718396 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:23:11.718405 | orchestrator | Thursday 19 February 2026 06:22:56 +0000 (0:00:01.173) 0:39:42.655 ***** 2026-02-19 06:23:11.718415 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:23:11.718424 | orchestrator | 2026-02-19 06:23:11.718434 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:23:11.718443 | orchestrator | Thursday 19 February 2026 06:22:57 +0000 (0:00:01.498) 0:39:44.153 ***** 2026-02-19 06:23:11.718453 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:23:11.718463 | orchestrator | 2026-02-19 06:23:11.718472 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:23:11.718482 | orchestrator | Thursday 19 February 2026 06:22:59 +0000 (0:00:01.135) 0:39:45.289 ***** 2026-02-19 06:23:11.718491 | orchestrator | skipping: [testbed-node-4] 2026-02-19 
06:23:11.718501 | orchestrator | 2026-02-19 06:23:11.718510 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:23:11.718520 | orchestrator | Thursday 19 February 2026 06:23:00 +0000 (0:00:01.226) 0:39:46.515 ***** 2026-02-19 06:23:11.718529 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:23:11.718539 | orchestrator | 2026-02-19 06:23:11.718549 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 06:23:11.718560 | orchestrator | Thursday 19 February 2026 06:23:01 +0000 (0:00:01.132) 0:39:47.648 ***** 2026-02-19 06:23:11.718572 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-19 06:23:11.718583 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-19 06:23:11.718624 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-19 06:23:11.718635 | orchestrator | 2026-02-19 06:23:11.718647 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 06:23:11.718659 | orchestrator | Thursday 19 February 2026 06:23:03 +0000 (0:00:01.892) 0:39:49.540 ***** 2026-02-19 06:23:11.718670 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-19 06:23:11.718681 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-19 06:23:11.718692 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-19 06:23:11.718703 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:23:11.718721 | orchestrator | 2026-02-19 06:23:11.718733 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 06:23:11.718744 | orchestrator | Thursday 19 February 2026 06:23:04 +0000 (0:00:01.127) 0:39:50.668 ***** 2026-02-19 06:23:11.718755 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-02-19 06:23:11.718767 | 
orchestrator |
2026-02-19 06:23:11.718778 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:23:11.718791 | orchestrator | Thursday 19 February 2026 06:23:05 +0000 (0:00:01.255) 0:39:51.924 *****
2026-02-19 06:23:11.718802 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:11.718813 | orchestrator |
2026-02-19 06:23:11.718824 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:23:11.718834 | orchestrator | Thursday 19 February 2026 06:23:06 +0000 (0:00:01.145) 0:39:53.070 *****
2026-02-19 06:23:11.718845 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:11.718856 | orchestrator |
2026-02-19 06:23:11.718867 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:23:11.718878 | orchestrator | Thursday 19 February 2026 06:23:07 +0000 (0:00:01.138) 0:39:54.208 *****
2026-02-19 06:23:11.718888 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:11.718899 | orchestrator |
2026-02-19 06:23:11.718911 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:23:11.718922 | orchestrator | Thursday 19 February 2026 06:23:09 +0000 (0:00:01.156) 0:39:55.365 *****
2026-02-19 06:23:11.718932 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:11.718942 | orchestrator |
2026-02-19 06:23:11.718956 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:23:11.718966 | orchestrator | Thursday 19 February 2026 06:23:10 +0000 (0:00:01.207) 0:39:56.572 *****
2026-02-19 06:23:11.718983 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-19 06:23:51.530944 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-19 06:23:51.531061 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-19 06:23:51.531077 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:51.531091 | orchestrator |
2026-02-19 06:23:51.531103 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 06:23:51.531115 | orchestrator | Thursday 19 February 2026 06:23:11 +0000 (0:00:01.357) 0:39:57.930 *****
2026-02-19 06:23:51.531127 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-19 06:23:51.531137 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-19 06:23:51.531148 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-19 06:23:51.531159 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:51.531170 | orchestrator |
2026-02-19 06:23:51.531181 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 06:23:51.531192 | orchestrator | Thursday 19 February 2026 06:23:13 +0000 (0:00:01.415) 0:39:59.346 *****
2026-02-19 06:23:51.531203 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-19 06:23:51.531213 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-19 06:23:51.531224 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-19 06:23:51.531235 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:51.531245 | orchestrator |
2026-02-19 06:23:51.531256 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 06:23:51.531267 | orchestrator | Thursday 19 February 2026 06:23:14 +0000 (0:00:01.357) 0:40:00.703 *****
2026-02-19 06:23:51.531278 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.531289 | orchestrator |
2026-02-19 06:23:51.531300 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 06:23:51.531311 | orchestrator | Thursday 19 February 2026 06:23:15 +0000 (0:00:01.135) 0:40:01.839 *****
2026-02-19 06:23:51.531322 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-19 06:23:51.531372 | orchestrator |
2026-02-19 06:23:51.531385 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-19 06:23:51.531396 | orchestrator | Thursday 19 February 2026 06:23:16 +0000 (0:00:01.324) 0:40:03.163 *****
2026-02-19 06:23:51.531406 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:23:51.531418 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:23:51.531428 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:23:51.531439 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-19 06:23:51.531450 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-19 06:23:51.531460 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-19 06:23:51.531471 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 06:23:51.531483 | orchestrator |
2026-02-19 06:23:51.531523 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-19 06:23:51.531537 | orchestrator | Thursday 19 February 2026 06:23:19 +0000 (0:00:02.108) 0:40:05.271 *****
2026-02-19 06:23:51.531550 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:23:51.531561 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:23:51.531574 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:23:51.531585 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-19 06:23:51.531597 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-19 06:23:51.531609 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-19 06:23:51.531621 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 06:23:51.531633 | orchestrator |
2026-02-19 06:23:51.531645 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-19 06:23:51.531656 | orchestrator | Thursday 19 February 2026 06:23:21 +0000 (0:00:02.355) 0:40:07.627 *****
2026-02-19 06:23:51.531668 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.531681 | orchestrator |
2026-02-19 06:23:51.531694 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-19 06:23:51.531706 | orchestrator | Thursday 19 February 2026 06:23:22 +0000 (0:00:01.149) 0:40:08.776 *****
2026-02-19 06:23:51.531728 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.531741 | orchestrator |
2026-02-19 06:23:51.531755 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-19 06:23:51.531777 | orchestrator | Thursday 19 February 2026 06:23:23 +0000 (0:00:00.904) 0:40:09.579 *****
2026-02-19 06:23:51.531797 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.531817 | orchestrator |
2026-02-19 06:23:51.531836 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-19 06:23:51.531858 | orchestrator | Thursday 19 February 2026 06:23:24 +0000 (0:00:00.904) 0:40:10.483 *****
2026-02-19 06:23:51.531879 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-02-19 06:23:51.531900 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-02-19 06:23:51.531920 | orchestrator |
2026-02-19 06:23:51.531932 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 06:23:51.531943 | orchestrator | Thursday 19 February 2026 06:23:28 +0000 (0:00:03.781) 0:40:14.265 *****
2026-02-19 06:23:51.531968 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-02-19 06:23:51.531981 | orchestrator |
2026-02-19 06:23:51.531992 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-19 06:23:51.532022 | orchestrator | Thursday 19 February 2026 06:23:29 +0000 (0:00:01.101) 0:40:15.366 *****
2026-02-19 06:23:51.532044 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-02-19 06:23:51.532055 | orchestrator |
2026-02-19 06:23:51.532066 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-19 06:23:51.532077 | orchestrator | Thursday 19 February 2026 06:23:30 +0000 (0:00:01.113) 0:40:16.480 *****
2026-02-19 06:23:51.532087 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:51.532098 | orchestrator |
2026-02-19 06:23:51.532109 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-19 06:23:51.532120 | orchestrator | Thursday 19 February 2026 06:23:31 +0000 (0:00:01.132) 0:40:17.612 *****
2026-02-19 06:23:51.532130 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.532141 | orchestrator |
2026-02-19 06:23:51.532152 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-19 06:23:51.532162 | orchestrator | Thursday 19 February 2026 06:23:32 +0000 (0:00:01.584) 0:40:19.197 *****
2026-02-19 06:23:51.532173 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.532184 | orchestrator |
2026-02-19 06:23:51.532195 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-19 06:23:51.532205 | orchestrator | Thursday 19 February 2026 06:23:34 +0000 (0:00:01.535) 0:40:20.732 *****
2026-02-19 06:23:51.532216 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.532227 | orchestrator |
2026-02-19 06:23:51.532238 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-19 06:23:51.532249 | orchestrator | Thursday 19 February 2026 06:23:36 +0000 (0:00:01.534) 0:40:22.266 *****
2026-02-19 06:23:51.532259 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:51.532270 | orchestrator |
2026-02-19 06:23:51.532281 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-19 06:23:51.532291 | orchestrator | Thursday 19 February 2026 06:23:37 +0000 (0:00:01.592) 0:40:23.859 *****
2026-02-19 06:23:51.532302 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:51.532313 | orchestrator |
2026-02-19 06:23:51.532323 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-19 06:23:51.532334 | orchestrator | Thursday 19 February 2026 06:23:38 +0000 (0:00:01.108) 0:40:24.968 *****
2026-02-19 06:23:51.532345 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:51.532356 | orchestrator |
2026-02-19 06:23:51.532367 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-19 06:23:51.532377 | orchestrator | Thursday 19 February 2026 06:23:39 +0000 (0:00:01.101) 0:40:26.069 *****
2026-02-19 06:23:51.532388 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.532399 | orchestrator |
2026-02-19 06:23:51.532409 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-19 06:23:51.532420 | orchestrator | Thursday 19 February 2026 06:23:41 +0000 (0:00:01.526) 0:40:27.596 *****
2026-02-19 06:23:51.532431 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.532441 | orchestrator |
2026-02-19 06:23:51.532452 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-19 06:23:51.532463 | orchestrator | Thursday 19 February 2026 06:23:42 +0000 (0:00:01.564) 0:40:29.160 *****
2026-02-19 06:23:51.532473 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:51.532484 | orchestrator |
2026-02-19 06:23:51.532520 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 06:23:51.532533 | orchestrator | Thursday 19 February 2026 06:23:43 +0000 (0:00:00.748) 0:40:29.909 *****
2026-02-19 06:23:51.532544 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:51.532555 | orchestrator |
2026-02-19 06:23:51.532566 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 06:23:51.532576 | orchestrator | Thursday 19 February 2026 06:23:44 +0000 (0:00:00.759) 0:40:30.668 *****
2026-02-19 06:23:51.532587 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.532598 | orchestrator |
2026-02-19 06:23:51.532609 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 06:23:51.532619 | orchestrator | Thursday 19 February 2026 06:23:45 +0000 (0:00:00.807) 0:40:31.476 *****
2026-02-19 06:23:51.532637 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.532648 | orchestrator |
2026-02-19 06:23:51.532659 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 06:23:51.532670 | orchestrator | Thursday 19 February 2026 06:23:46 +0000 (0:00:00.805) 0:40:32.282 *****
2026-02-19 06:23:51.532681 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.532692 | orchestrator |
2026-02-19 06:23:51.532702 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 06:23:51.532713 | orchestrator | Thursday 19 February 2026 06:23:46 +0000 (0:00:00.748) 0:40:33.031 *****
2026-02-19 06:23:51.532724 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:51.532735 | orchestrator |
2026-02-19 06:23:51.532745 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 06:23:51.532756 | orchestrator | Thursday 19 February 2026 06:23:47 +0000 (0:00:00.762) 0:40:33.793 *****
2026-02-19 06:23:51.532767 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:51.532777 | orchestrator |
2026-02-19 06:23:51.532788 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 06:23:51.532799 | orchestrator | Thursday 19 February 2026 06:23:48 +0000 (0:00:00.772) 0:40:34.565 *****
2026-02-19 06:23:51.532810 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:23:51.532821 | orchestrator |
2026-02-19 06:23:51.532838 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 06:23:51.532856 | orchestrator | Thursday 19 February 2026 06:23:49 +0000 (0:00:00.763) 0:40:35.329 *****
2026-02-19 06:23:51.532876 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.532896 | orchestrator |
2026-02-19 06:23:51.532915 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 06:23:51.532933 | orchestrator | Thursday 19 February 2026 06:23:49 +0000 (0:00:00.768) 0:40:36.097 *****
2026-02-19 06:23:51.532950 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:23:51.532961 | orchestrator |
2026-02-19 06:23:51.532979 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-19 06:23:51.532990 | orchestrator | Thursday 19 February 2026 06:23:50 +0000 (0:00:00.888) 0:40:36.985 *****
2026-02-19 06:23:51.533008 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.330406 | orchestrator |
2026-02-19 06:24:33.330588 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-19 06:24:33.330607 | orchestrator | Thursday 19 February 2026 06:23:51 +0000 (0:00:00.758) 0:40:37.744 *****
2026-02-19 06:24:33.330620 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.330632 | orchestrator |
2026-02-19 06:24:33.330644 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-19 06:24:33.330655 | orchestrator | Thursday 19 February 2026 06:23:52 +0000 (0:00:00.787) 0:40:38.531 *****
2026-02-19 06:24:33.330666 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.330676 | orchestrator |
2026-02-19 06:24:33.330687 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-19 06:24:33.330698 | orchestrator | Thursday 19 February 2026 06:23:53 +0000 (0:00:00.850) 0:40:39.382 *****
2026-02-19 06:24:33.330709 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.330720 | orchestrator |
2026-02-19 06:24:33.330730 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-19 06:24:33.330741 | orchestrator | Thursday 19 February 2026 06:23:53 +0000 (0:00:00.758) 0:40:40.140 *****
2026-02-19 06:24:33.330752 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.330763 | orchestrator |
2026-02-19 06:24:33.330773 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-19 06:24:33.330784 | orchestrator | Thursday 19 February 2026 06:23:54 +0000 (0:00:00.765) 0:40:40.906 *****
2026-02-19 06:24:33.330795 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.330806 | orchestrator |
2026-02-19 06:24:33.330817 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-19 06:24:33.330828 | orchestrator | Thursday 19 February 2026 06:23:55 +0000 (0:00:00.773) 0:40:41.679 *****
2026-02-19 06:24:33.330865 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.330877 | orchestrator |
2026-02-19 06:24:33.330888 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-19 06:24:33.330899 | orchestrator | Thursday 19 February 2026 06:23:56 +0000 (0:00:00.754) 0:40:42.434 *****
2026-02-19 06:24:33.330910 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.330920 | orchestrator |
2026-02-19 06:24:33.330931 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-19 06:24:33.330942 | orchestrator | Thursday 19 February 2026 06:23:56 +0000 (0:00:00.749) 0:40:43.184 *****
2026-02-19 06:24:33.330954 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.330966 | orchestrator |
2026-02-19 06:24:33.330979 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-19 06:24:33.330991 | orchestrator | Thursday 19 February 2026 06:23:57 +0000 (0:00:00.764) 0:40:43.948 *****
2026-02-19 06:24:33.331003 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.331015 | orchestrator |
2026-02-19 06:24:33.331027 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-19 06:24:33.331039 | orchestrator | Thursday 19 February 2026 06:23:58 +0000 (0:00:00.771) 0:40:44.720 *****
2026-02-19 06:24:33.331051 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.331064 | orchestrator |
2026-02-19 06:24:33.331076 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-19 06:24:33.331088 | orchestrator | Thursday 19 February 2026 06:23:59 +0000 (0:00:00.771) 0:40:45.492 *****
2026-02-19 06:24:33.331100 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.331112 | orchestrator |
2026-02-19 06:24:33.331125 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-19 06:24:33.331137 | orchestrator | Thursday 19 February 2026 06:24:00 +0000 (0:00:00.865) 0:40:46.357 *****
2026-02-19 06:24:33.331149 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:24:33.331162 | orchestrator |
2026-02-19 06:24:33.331174 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-19 06:24:33.331193 | orchestrator | Thursday 19 February 2026 06:24:01 +0000 (0:00:01.581) 0:40:47.939 *****
2026-02-19 06:24:33.331212 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:24:33.331229 | orchestrator |
2026-02-19 06:24:33.331247 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-19 06:24:33.331264 | orchestrator | Thursday 19 February 2026 06:24:03 +0000 (0:00:01.940) 0:40:49.880 *****
2026-02-19 06:24:33.331282 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-02-19 06:24:33.331301 | orchestrator |
2026-02-19 06:24:33.331319 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-19 06:24:33.331336 | orchestrator | Thursday 19 February 2026 06:24:04 +0000 (0:00:01.121) 0:40:51.002 *****
2026-02-19 06:24:33.331355 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.331374 | orchestrator |
2026-02-19 06:24:33.331393 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-19 06:24:33.331503 | orchestrator | Thursday 19 February 2026 06:24:05 +0000 (0:00:01.153) 0:40:52.155 *****
2026-02-19 06:24:33.331519 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.331530 | orchestrator |
2026-02-19 06:24:33.331540 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-19 06:24:33.331551 | orchestrator | Thursday 19 February 2026 06:24:07 +0000 (0:00:01.144) 0:40:53.299 *****
2026-02-19 06:24:33.331562 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 06:24:33.331573 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 06:24:33.331584 | orchestrator |
2026-02-19 06:24:33.331594 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-19 06:24:33.331605 | orchestrator | Thursday 19 February 2026 06:24:08 +0000 (0:00:01.832) 0:40:55.131 *****
2026-02-19 06:24:33.331628 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:24:33.331640 | orchestrator |
2026-02-19 06:24:33.331651 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-19 06:24:33.331676 | orchestrator | Thursday 19 February 2026 06:24:10 +0000 (0:00:01.429) 0:40:56.561 *****
2026-02-19 06:24:33.331688 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.331699 | orchestrator |
2026-02-19 06:24:33.331730 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-19 06:24:33.331742 | orchestrator | Thursday 19 February 2026 06:24:11 +0000 (0:00:01.141) 0:40:57.702 *****
2026-02-19 06:24:33.331752 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.331763 | orchestrator |
2026-02-19 06:24:33.331774 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-19 06:24:33.331784 | orchestrator | Thursday 19 February 2026 06:24:12 +0000 (0:00:00.782) 0:40:58.485 *****
2026-02-19 06:24:33.331795 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.331806 | orchestrator |
2026-02-19 06:24:33.331816 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-19 06:24:33.331827 | orchestrator | Thursday 19 February 2026 06:24:13 +0000 (0:00:00.742) 0:40:59.227 *****
2026-02-19 06:24:33.331844 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-02-19 06:24:33.331863 | orchestrator |
2026-02-19 06:24:33.331881 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-19 06:24:33.331899 | orchestrator | Thursday 19 February 2026 06:24:14 +0000 (0:00:01.189) 0:41:00.417 *****
2026-02-19 06:24:33.331915 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:24:33.331932 | orchestrator |
2026-02-19 06:24:33.331952 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-19 06:24:33.331969 | orchestrator | Thursday 19 February 2026 06:24:15 +0000 (0:00:01.712) 0:41:02.130 *****
2026-02-19 06:24:33.331985 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-19 06:24:33.332002 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-19 06:24:33.332020 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-19 06:24:33.332038 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.332058 | orchestrator |
2026-02-19 06:24:33.332076 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-19 06:24:33.332092 | orchestrator | Thursday 19 February 2026 06:24:17 +0000 (0:00:01.132) 0:41:03.263 *****
2026-02-19 06:24:33.332102 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.332113 | orchestrator |
2026-02-19 06:24:33.332124 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-19 06:24:33.332134 | orchestrator | Thursday 19 February 2026 06:24:18 +0000 (0:00:01.140) 0:41:04.404 *****
2026-02-19 06:24:33.332145 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.332156 | orchestrator |
2026-02-19 06:24:33.332166 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-19 06:24:33.332195 | orchestrator | Thursday 19 February 2026 06:24:19 +0000 (0:00:01.153) 0:41:05.557 *****
2026-02-19 06:24:33.332217 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.332228 | orchestrator |
2026-02-19 06:24:33.332238 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-19 06:24:33.332249 | orchestrator | Thursday 19 February 2026 06:24:20 +0000 (0:00:01.133) 0:41:06.691 *****
2026-02-19 06:24:33.332260 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.332270 | orchestrator |
2026-02-19 06:24:33.332287 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-19 06:24:33.332307 | orchestrator | Thursday 19 February 2026 06:24:21 +0000 (0:00:01.127) 0:41:07.819 *****
2026-02-19 06:24:33.332327 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.332346 | orchestrator |
2026-02-19 06:24:33.332366 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-19 06:24:33.332398 | orchestrator | Thursday 19 February 2026 06:24:22 +0000 (0:00:00.806) 0:41:08.626 *****
2026-02-19 06:24:33.332442 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:24:33.332460 | orchestrator |
2026-02-19 06:24:33.332480 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-19 06:24:33.332498 | orchestrator | Thursday 19 February 2026 06:24:24 +0000 (0:00:02.193) 0:41:10.819 *****
2026-02-19 06:24:33.332516 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:24:33.332628 | orchestrator |
2026-02-19 06:24:33.332643 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-19 06:24:33.332653 | orchestrator | Thursday 19 February 2026 06:24:25 +0000 (0:00:00.796) 0:41:11.616 *****
2026-02-19 06:24:33.332664 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-02-19 06:24:33.332675 | orchestrator |
2026-02-19 06:24:33.332686 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-19 06:24:33.332697 | orchestrator | Thursday 19 February 2026 06:24:26 +0000 (0:00:01.200) 0:41:12.816 *****
2026-02-19 06:24:33.332707 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.332719 | orchestrator |
2026-02-19 06:24:33.332729 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-19 06:24:33.332740 | orchestrator | Thursday 19 February 2026 06:24:27 +0000 (0:00:01.116) 0:41:13.933 *****
2026-02-19 06:24:33.332755 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.332775 | orchestrator |
2026-02-19 06:24:33.332794 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-19 06:24:33.332812 | orchestrator | Thursday 19 February 2026 06:24:28 +0000 (0:00:01.118) 0:41:15.051 *****
2026-02-19 06:24:33.332828 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.332846 | orchestrator |
2026-02-19 06:24:33.332865 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-19 06:24:33.332884 | orchestrator | Thursday 19 February 2026 06:24:29 +0000 (0:00:01.115) 0:41:16.166 *****
2026-02-19 06:24:33.332901 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.332917 | orchestrator |
2026-02-19 06:24:33.332935 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-19 06:24:33.332956 | orchestrator | Thursday 19 February 2026 06:24:31 +0000 (0:00:01.118) 0:41:17.285 *****
2026-02-19 06:24:33.332974 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:24:33.332994 | orchestrator |
2026-02-19 06:24:33.333027 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-19 06:24:33.333046 | orchestrator | Thursday 19 February 2026 06:24:32 +0000 (0:00:01.123) 0:41:18.408 *****
2026-02-19 06:24:33.333085 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.339438 | orchestrator |
2026-02-19 06:25:15.339626 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-19 06:25:15.339649 | orchestrator | Thursday 19 February 2026 06:24:33 +0000 (0:00:01.136) 0:41:19.545 *****
2026-02-19 06:25:15.339662 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.339674 | orchestrator |
2026-02-19 06:25:15.339774 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-19 06:25:15.339790 | orchestrator | Thursday 19 February 2026 06:24:34 +0000 (0:00:01.114) 0:41:20.660 *****
2026-02-19 06:25:15.339801 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.339813 | orchestrator |
2026-02-19 06:25:15.339824 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-19 06:25:15.339835 | orchestrator | Thursday 19 February 2026 06:24:35 +0000 (0:00:01.112) 0:41:21.772 *****
2026-02-19 06:25:15.339846 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:25:15.339858 | orchestrator |
2026-02-19 06:25:15.339869 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-19 06:25:15.339880 | orchestrator | Thursday 19 February 2026 06:24:36 +0000 (0:00:00.792) 0:41:22.565 *****
2026-02-19 06:25:15.339891 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-02-19 06:25:15.339932 | orchestrator |
2026-02-19 06:25:15.339946 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-19 06:25:15.339959 | orchestrator | Thursday 19 February 2026 06:24:37 +0000 (0:00:01.094) 0:41:23.660 *****
2026-02-19 06:25:15.339971 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-02-19 06:25:15.339984 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-19 06:25:15.339996 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-19 06:25:15.340027 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-19 06:25:15.340039 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-19 06:25:15.340051 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-19 06:25:15.340064 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-19 06:25:15.340077 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-19 06:25:15.340089 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-19 06:25:15.340101 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-19 06:25:15.340113 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-19 06:25:15.340125 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-19 06:25:15.340137 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-19 06:25:15.340148 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 06:25:15.340161 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-02-19 06:25:15.340174 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-02-19 06:25:15.340186 | orchestrator |
2026-02-19 06:25:15.340198 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-19 06:25:15.340210 | orchestrator | Thursday 19 February 2026 06:24:44 +0000 (0:00:06.578) 0:41:30.238 *****
2026-02-19 06:25:15.340223 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-02-19 06:25:15.340235 | orchestrator |
2026-02-19 06:25:15.340247 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-19 06:25:15.340259 | orchestrator | Thursday 19 February 2026 06:24:45 +0000 (0:00:01.205) 0:41:31.444 *****
2026-02-19 06:25:15.340272 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 06:25:15.340299 | orchestrator |
2026-02-19 06:25:15.340310 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-19 06:25:15.340321 | orchestrator | Thursday 19 February 2026 06:24:46 +0000 (0:00:01.522) 0:41:32.967 *****
2026-02-19 06:25:15.340355 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 06:25:15.340367 | orchestrator |
2026-02-19 06:25:15.340378 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-19 06:25:15.340388 | orchestrator | Thursday 19 February 2026 06:24:48 +0000 (0:00:01.621) 0:41:34.589 *****
2026-02-19 06:25:15.340399 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.340410 | orchestrator |
2026-02-19 06:25:15.340421 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 06:25:15.340432 | orchestrator | Thursday 19 February 2026 06:24:49 +0000 (0:00:00.753) 0:41:35.342 *****
2026-02-19 06:25:15.340442 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.340453 | orchestrator |
2026-02-19 06:25:15.340464 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 06:25:15.340475 | orchestrator | Thursday 19 February 2026 06:24:49 +0000 (0:00:00.749) 0:41:36.092 *****
2026-02-19 06:25:15.340486 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.340497 | orchestrator |
2026-02-19 06:25:15.340508 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 06:25:15.340518 | orchestrator | Thursday 19 February 2026 06:24:50 +0000 (0:00:00.810) 0:41:36.902 *****
2026-02-19 06:25:15.340540 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.340551 | orchestrator |
2026-02-19 06:25:15.340562 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 06:25:15.340572 | orchestrator | Thursday 19 February 2026 06:24:51 +0000 (0:00:00.750) 0:41:37.653 *****
2026-02-19 06:25:15.340583 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.340594 | orchestrator |
2026-02-19 06:25:15.340647 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 06:25:15.340660 | orchestrator | Thursday 19 February 2026 06:24:52 +0000 (0:00:00.769) 0:41:38.422 *****
2026-02-19 06:25:15.340693 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.340704 | orchestrator |
2026-02-19 06:25:15.340716 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 06:25:15.340727 | orchestrator | Thursday 19 February 2026 06:24:52 +0000 (0:00:00.763) 0:41:39.185 *****
2026-02-19 06:25:15.340737 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.340748 | orchestrator |
2026-02-19 06:25:15.340759 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 06:25:15.340770 | orchestrator | Thursday 19 February 2026 06:24:53 +0000 (0:00:00.780) 0:41:39.966 *****
2026-02-19 06:25:15.340781 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.340792 | orchestrator |
2026-02-19 06:25:15.340802 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 06:25:15.340813 | orchestrator | Thursday 19 February 2026 06:24:54 +0000 (0:00:00.768) 0:41:40.735 *****
2026-02-19 06:25:15.340824 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.340835 | orchestrator |
2026-02-19 06:25:15.340846 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 06:25:15.340857 | orchestrator | Thursday 19 February 2026 06:24:55 +0000 (0:00:00.808) 0:41:41.543 *****
2026-02-19 06:25:15.340868 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.340879 | orchestrator |
2026-02-19 06:25:15.340890 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 06:25:15.340901 | orchestrator | Thursday 19 February 2026 06:24:56 +0000 (0:00:00.769) 0:41:42.313 *****
2026-02-19 06:25:15.340912 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:25:15.340922 | orchestrator |
2026-02-19 06:25:15.340933 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 06:25:15.340944 | orchestrator | Thursday 19 February 2026 06:24:56 +0000 (0:00:00.837) 0:41:43.150 *****
2026-02-19 06:25:15.340955 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-02-19 06:25:15.340965 | orchestrator |
2026-02-19 06:25:15.340976 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 06:25:15.340987 | orchestrator | Thursday 19 February 2026 06:25:01 +0000 (0:00:04.204) 0:41:47.355 *****
2026-02-19 06:25:15.340998 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 06:25:15.341009 | orchestrator |
2026-02-19 06:25:15.341019 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-19 06:25:15.341030 | orchestrator | Thursday 19 February 2026 06:25:01 +0000 (0:00:00.804) 0:41:48.159 *****
2026-02-19 06:25:15.341060 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-19 06:25:15.341076 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-19 06:25:15.341097 | orchestrator |
2026-02-19 06:25:15.341108 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-19 06:25:15.341119 | orchestrator | Thursday 19 February 2026 06:25:09 +0000 (0:00:07.595) 0:41:55.755 *****
2026-02-19 06:25:15.341130 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.341141 | orchestrator |
2026-02-19 06:25:15.341151 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-19 06:25:15.341162 | orchestrator | Thursday 19 February 2026 06:25:10 +0000 (0:00:00.773) 0:41:56.528 *****
2026-02-19 06:25:15.341173 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.341184 | orchestrator |
2026-02-19 06:25:15.341195 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:25:15.341206 | orchestrator | Thursday 19 February 2026 06:25:11 +0000 (0:00:00.749) 0:41:57.278 *****
2026-02-19 06:25:15.341217 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:25:15.341227 | orchestrator |
2026-02-19 06:25:15.341238 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to
radosgw_address_block ipv4] **** 2026-02-19 06:25:15.341249 | orchestrator | Thursday 19 February 2026 06:25:11 +0000 (0:00:00.772) 0:41:58.050 ***** 2026-02-19 06:25:15.341260 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:25:15.341271 | orchestrator | 2026-02-19 06:25:15.341282 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-19 06:25:15.341293 | orchestrator | Thursday 19 February 2026 06:25:12 +0000 (0:00:00.779) 0:41:58.830 ***** 2026-02-19 06:25:15.341303 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:25:15.341314 | orchestrator | 2026-02-19 06:25:15.341325 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-19 06:25:15.341355 | orchestrator | Thursday 19 February 2026 06:25:13 +0000 (0:00:00.783) 0:41:59.614 ***** 2026-02-19 06:25:15.341366 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:25:15.341378 | orchestrator | 2026-02-19 06:25:15.341388 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-19 06:25:15.341399 | orchestrator | Thursday 19 February 2026 06:25:14 +0000 (0:00:00.857) 0:42:00.471 ***** 2026-02-19 06:25:15.341417 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-19 06:25:15.341429 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-19 06:25:15.341447 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-19 06:26:03.677390 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:03.677568 | orchestrator | 2026-02-19 06:26:03.677599 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-19 06:26:03.677618 | orchestrator | Thursday 19 February 2026 06:25:15 +0000 (0:00:01.078) 0:42:01.550 ***** 2026-02-19 06:26:03.677630 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-19 06:26:03.677641 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-19 06:26:03.677652 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-19 06:26:03.677663 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:03.677674 | orchestrator | 2026-02-19 06:26:03.677686 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 06:26:03.677697 | orchestrator | Thursday 19 February 2026 06:25:16 +0000 (0:00:01.388) 0:42:02.938 ***** 2026-02-19 06:26:03.677708 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-19 06:26:03.677719 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-19 06:26:03.677730 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-19 06:26:03.677741 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:03.677752 | orchestrator | 2026-02-19 06:26:03.677764 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 06:26:03.677777 | orchestrator | Thursday 19 February 2026 06:25:18 +0000 (0:00:01.373) 0:42:04.311 ***** 2026-02-19 06:26:03.677824 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:26:03.677872 | orchestrator | 2026-02-19 06:26:03.677893 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-19 06:26:03.677912 | orchestrator | Thursday 19 February 2026 06:25:18 +0000 (0:00:00.834) 0:42:05.146 ***** 2026-02-19 06:26:03.677933 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-19 06:26:03.677952 | orchestrator | 2026-02-19 06:26:03.677973 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-19 06:26:03.677988 | orchestrator | Thursday 19 February 2026 06:25:19 +0000 (0:00:00.987) 0:42:06.134 ***** 2026-02-19 06:26:03.678001 | orchestrator | changed: [testbed-node-4] 2026-02-19 06:26:03.678013 | orchestrator | 
2026-02-19 06:26:03.678095 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-19 06:26:03.678108 | orchestrator | Thursday 19 February 2026 06:25:21 +0000 (0:00:01.440) 0:42:07.575 ***** 2026-02-19 06:26:03.678120 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:26:03.678131 | orchestrator | 2026-02-19 06:26:03.678142 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-19 06:26:03.678153 | orchestrator | Thursday 19 February 2026 06:25:22 +0000 (0:00:00.798) 0:42:08.373 ***** 2026-02-19 06:26:03.678169 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:26:03.678194 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:26:03.678222 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:26:03.678241 | orchestrator | 2026-02-19 06:26:03.678287 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-19 06:26:03.678305 | orchestrator | Thursday 19 February 2026 06:25:23 +0000 (0:00:01.295) 0:42:09.669 ***** 2026-02-19 06:26:03.678322 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-02-19 06:26:03.678339 | orchestrator | 2026-02-19 06:26:03.678357 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-19 06:26:03.678374 | orchestrator | Thursday 19 February 2026 06:25:24 +0000 (0:00:01.158) 0:42:10.827 ***** 2026-02-19 06:26:03.678391 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:03.678407 | orchestrator | 2026-02-19 06:26:03.678426 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-19 06:26:03.678446 | orchestrator | Thursday 19 February 2026 06:25:25 +0000 (0:00:01.138) 
0:42:11.965 ***** 2026-02-19 06:26:03.678463 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:03.678481 | orchestrator | 2026-02-19 06:26:03.678498 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-19 06:26:03.678517 | orchestrator | Thursday 19 February 2026 06:25:26 +0000 (0:00:01.119) 0:42:13.085 ***** 2026-02-19 06:26:03.678534 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:26:03.678553 | orchestrator | 2026-02-19 06:26:03.678572 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-19 06:26:03.678592 | orchestrator | Thursday 19 February 2026 06:25:28 +0000 (0:00:01.476) 0:42:14.562 ***** 2026-02-19 06:26:03.678610 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:26:03.678628 | orchestrator | 2026-02-19 06:26:03.678647 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-19 06:26:03.678665 | orchestrator | Thursday 19 February 2026 06:25:29 +0000 (0:00:01.147) 0:42:15.710 ***** 2026-02-19 06:26:03.678685 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-19 06:26:03.678698 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-19 06:26:03.678709 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-19 06:26:03.678720 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-19 06:26:03.678731 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-19 06:26:03.678742 | orchestrator | 2026-02-19 06:26:03.678770 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-19 06:26:03.678782 | orchestrator | Thursday 19 February 2026 06:25:32 +0000 (0:00:02.640) 0:42:18.351 ***** 2026-02-19 
06:26:03.678792 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:03.678803 | orchestrator | 2026-02-19 06:26:03.678829 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-19 06:26:03.678840 | orchestrator | Thursday 19 February 2026 06:25:32 +0000 (0:00:00.780) 0:42:19.132 ***** 2026-02-19 06:26:03.678874 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-02-19 06:26:03.678886 | orchestrator | 2026-02-19 06:26:03.678897 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-19 06:26:03.678922 | orchestrator | Thursday 19 February 2026 06:25:33 +0000 (0:00:01.085) 0:42:20.218 ***** 2026-02-19 06:26:03.678933 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-19 06:26:03.678944 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-19 06:26:03.678954 | orchestrator | 2026-02-19 06:26:03.678970 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-19 06:26:03.679007 | orchestrator | Thursday 19 February 2026 06:25:35 +0000 (0:00:01.848) 0:42:22.066 ***** 2026-02-19 06:26:03.679033 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 06:26:03.679049 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-19 06:26:03.679067 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-19 06:26:03.679083 | orchestrator | 2026-02-19 06:26:03.679100 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-19 06:26:03.679118 | orchestrator | Thursday 19 February 2026 06:25:39 +0000 (0:00:03.347) 0:42:25.414 ***** 2026-02-19 06:26:03.679136 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-19 06:26:03.679154 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-19 
06:26:03.679173 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:26:03.679192 | orchestrator | 2026-02-19 06:26:03.679210 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-19 06:26:03.679226 | orchestrator | Thursday 19 February 2026 06:25:40 +0000 (0:00:01.642) 0:42:27.057 ***** 2026-02-19 06:26:03.679237 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:03.679306 | orchestrator | 2026-02-19 06:26:03.679320 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-19 06:26:03.679331 | orchestrator | Thursday 19 February 2026 06:25:41 +0000 (0:00:00.853) 0:42:27.910 ***** 2026-02-19 06:26:03.679341 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:03.679352 | orchestrator | 2026-02-19 06:26:03.679363 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-19 06:26:03.679374 | orchestrator | Thursday 19 February 2026 06:25:42 +0000 (0:00:00.771) 0:42:28.682 ***** 2026-02-19 06:26:03.679384 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:03.679395 | orchestrator | 2026-02-19 06:26:03.679406 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-19 06:26:03.679417 | orchestrator | Thursday 19 February 2026 06:25:43 +0000 (0:00:00.782) 0:42:29.464 ***** 2026-02-19 06:26:03.679435 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-02-19 06:26:03.679454 | orchestrator | 2026-02-19 06:26:03.679473 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-19 06:26:03.679493 | orchestrator | Thursday 19 February 2026 06:25:44 +0000 (0:00:01.091) 0:42:30.556 ***** 2026-02-19 06:26:03.679513 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:26:03.679534 | orchestrator | 2026-02-19 06:26:03.679554 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-19 06:26:03.679574 | orchestrator | Thursday 19 February 2026 06:25:45 +0000 (0:00:01.508) 0:42:32.065 ***** 2026-02-19 06:26:03.679593 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:26:03.679604 | orchestrator | 2026-02-19 06:26:03.679615 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-19 06:26:03.679638 | orchestrator | Thursday 19 February 2026 06:25:49 +0000 (0:00:03.490) 0:42:35.556 ***** 2026-02-19 06:26:03.679648 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-02-19 06:26:03.679659 | orchestrator | 2026-02-19 06:26:03.679671 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-19 06:26:03.679681 | orchestrator | Thursday 19 February 2026 06:25:50 +0000 (0:00:01.192) 0:42:36.748 ***** 2026-02-19 06:26:03.679692 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:26:03.679703 | orchestrator | 2026-02-19 06:26:03.679714 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-19 06:26:03.679724 | orchestrator | Thursday 19 February 2026 06:25:52 +0000 (0:00:01.962) 0:42:38.711 ***** 2026-02-19 06:26:03.679735 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:26:03.679746 | orchestrator | 2026-02-19 06:26:03.679756 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-19 06:26:03.679767 | orchestrator | Thursday 19 February 2026 06:25:54 +0000 (0:00:01.910) 0:42:40.622 ***** 2026-02-19 06:26:03.679778 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:26:03.679788 | orchestrator | 2026-02-19 06:26:03.679802 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-19 06:26:03.679821 | orchestrator | Thursday 19 February 2026 06:25:56 +0000 (0:00:02.294) 0:42:42.916 ***** 2026-02-19 
06:26:03.679838 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:03.679856 | orchestrator | 2026-02-19 06:26:03.679875 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-19 06:26:03.679893 | orchestrator | Thursday 19 February 2026 06:25:57 +0000 (0:00:01.107) 0:42:44.024 ***** 2026-02-19 06:26:03.679913 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:03.679931 | orchestrator | 2026-02-19 06:26:03.679950 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-19 06:26:03.679962 | orchestrator | Thursday 19 February 2026 06:25:58 +0000 (0:00:01.106) 0:42:45.131 ***** 2026-02-19 06:26:03.679972 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-02-19 06:26:03.679983 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-02-19 06:26:03.679994 | orchestrator | 2026-02-19 06:26:03.680005 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-19 06:26:03.680015 | orchestrator | Thursday 19 February 2026 06:26:00 +0000 (0:00:01.868) 0:42:46.999 ***** 2026-02-19 06:26:03.680034 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-02-19 06:26:03.680046 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-02-19 06:26:03.680056 | orchestrator | 2026-02-19 06:26:03.680067 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-19 06:26:03.680088 | orchestrator | Thursday 19 February 2026 06:26:03 +0000 (0:00:02.891) 0:42:49.891 ***** 2026-02-19 06:26:55.177438 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-02-19 06:26:55.177568 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-19 06:26:55.177586 | orchestrator | 2026-02-19 06:26:55.177598 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-19 06:26:55.177609 | orchestrator | Thursday 19 February 2026 06:26:08 +0000 (0:00:04.443) 
0:42:54.335 ***** 2026-02-19 06:26:55.177619 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:55.177629 | orchestrator | 2026-02-19 06:26:55.177637 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-19 06:26:55.177646 | orchestrator | Thursday 19 February 2026 06:26:08 +0000 (0:00:00.852) 0:42:55.188 ***** 2026-02-19 06:26:55.177655 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:55.177665 | orchestrator | 2026-02-19 06:26:55.177674 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-19 06:26:55.177685 | orchestrator | Thursday 19 February 2026 06:26:09 +0000 (0:00:00.899) 0:42:56.087 ***** 2026-02-19 06:26:55.177693 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:55.177702 | orchestrator | 2026-02-19 06:26:55.177711 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-19 06:26:55.177749 | orchestrator | Thursday 19 February 2026 06:26:10 +0000 (0:00:00.935) 0:42:57.023 ***** 2026-02-19 06:26:55.177759 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:55.177768 | orchestrator | 2026-02-19 06:26:55.177777 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-19 06:26:55.177805 | orchestrator | Thursday 19 February 2026 06:26:11 +0000 (0:00:00.788) 0:42:57.811 ***** 2026-02-19 06:26:55.177816 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:55.177825 | orchestrator | 2026-02-19 06:26:55.177833 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-19 06:26:55.177842 | orchestrator | Thursday 19 February 2026 06:26:12 +0000 (0:00:00.803) 0:42:58.615 ***** 2026-02-19 06:26:55.177852 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-19 06:26:55.177863 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-19 06:26:55.177872 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-19 06:26:55.177882 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-02-19 06:26:55.177890 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:26:55.177901 | orchestrator | 2026-02-19 06:26:55.177907 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-19 06:26:55.177912 | orchestrator | Thursday 19 February 2026 06:26:26 +0000 (0:00:14.268) 0:43:12.883 ***** 2026-02-19 06:26:55.177918 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:55.177923 | orchestrator | 2026-02-19 06:26:55.177929 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-19 06:26:55.177934 | orchestrator | Thursday 19 February 2026 06:26:27 +0000 (0:00:00.765) 0:43:13.649 ***** 2026-02-19 06:26:55.177940 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:55.177945 | orchestrator | 2026-02-19 06:26:55.177951 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-19 06:26:55.177957 | orchestrator | Thursday 19 February 2026 06:26:28 +0000 (0:00:00.796) 0:43:14.445 ***** 2026-02-19 06:26:55.177964 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:55.177969 | orchestrator | 2026-02-19 06:26:55.177975 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-19 06:26:55.177981 | orchestrator | Thursday 19 February 2026 06:26:28 +0000 (0:00:00.747) 0:43:15.192 ***** 2026-02-19 06:26:55.177987 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:55.177993 | orchestrator 
| 2026-02-19 06:26:55.177999 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-19 06:26:55.178005 | orchestrator | Thursday 19 February 2026 06:26:29 +0000 (0:00:00.805) 0:43:15.998 ***** 2026-02-19 06:26:55.178011 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:55.178065 | orchestrator | 2026-02-19 06:26:55.178075 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-19 06:26:55.178099 | orchestrator | Thursday 19 February 2026 06:26:30 +0000 (0:00:00.758) 0:43:16.756 ***** 2026-02-19 06:26:55.178110 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:55.178120 | orchestrator | 2026-02-19 06:26:55.178129 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-19 06:26:55.178138 | orchestrator | Thursday 19 February 2026 06:26:31 +0000 (0:00:00.782) 0:43:17.539 ***** 2026-02-19 06:26:55.178147 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:26:55.178157 | orchestrator | 2026-02-19 06:26:55.178195 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-19 06:26:55.178204 | orchestrator | 2026-02-19 06:26:55.178213 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 06:26:55.178221 | orchestrator | Thursday 19 February 2026 06:26:32 +0000 (0:00:00.951) 0:43:18.490 ***** 2026-02-19 06:26:55.178253 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-19 06:26:55.178262 | orchestrator | 2026-02-19 06:26:55.178272 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-19 06:26:55.178281 | orchestrator | Thursday 19 February 2026 06:26:33 +0000 (0:00:01.254) 0:43:19.744 ***** 2026-02-19 06:26:55.178291 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:26:55.178301 | orchestrator | 
2026-02-19 06:26:55.178311 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-19 06:26:55.178335 | orchestrator | Thursday 19 February 2026 06:26:35 +0000 (0:00:01.491) 0:43:21.236 ***** 2026-02-19 06:26:55.178341 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:26:55.178346 | orchestrator | 2026-02-19 06:26:55.178352 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 06:26:55.178357 | orchestrator | Thursday 19 February 2026 06:26:36 +0000 (0:00:01.149) 0:43:22.386 ***** 2026-02-19 06:26:55.178381 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:26:55.178387 | orchestrator | 2026-02-19 06:26:55.178392 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 06:26:55.178398 | orchestrator | Thursday 19 February 2026 06:26:37 +0000 (0:00:01.437) 0:43:23.824 ***** 2026-02-19 06:26:55.178403 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:26:55.178408 | orchestrator | 2026-02-19 06:26:55.178414 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-19 06:26:55.178420 | orchestrator | Thursday 19 February 2026 06:26:38 +0000 (0:00:01.123) 0:43:24.948 ***** 2026-02-19 06:26:55.178425 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:26:55.178431 | orchestrator | 2026-02-19 06:26:55.178436 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-19 06:26:55.178442 | orchestrator | Thursday 19 February 2026 06:26:39 +0000 (0:00:01.124) 0:43:26.073 ***** 2026-02-19 06:26:55.178451 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:26:55.178459 | orchestrator | 2026-02-19 06:26:55.178464 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-19 06:26:55.178471 | orchestrator | Thursday 19 February 2026 06:26:41 +0000 (0:00:01.168) 0:43:27.241 
***** 2026-02-19 06:26:55.178476 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:26:55.178482 | orchestrator | 2026-02-19 06:26:55.178487 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-19 06:26:55.178492 | orchestrator | Thursday 19 February 2026 06:26:42 +0000 (0:00:01.100) 0:43:28.342 ***** 2026-02-19 06:26:55.178498 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:26:55.178503 | orchestrator | 2026-02-19 06:26:55.178510 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-19 06:26:55.178519 | orchestrator | Thursday 19 February 2026 06:26:43 +0000 (0:00:01.128) 0:43:29.470 ***** 2026-02-19 06:26:55.178525 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:26:55.178530 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:26:55.178536 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:26:55.178541 | orchestrator | 2026-02-19 06:26:55.178546 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-19 06:26:55.178552 | orchestrator | Thursday 19 February 2026 06:26:45 +0000 (0:00:01.933) 0:43:31.403 ***** 2026-02-19 06:26:55.178557 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:26:55.178562 | orchestrator | 2026-02-19 06:26:55.178568 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-19 06:26:55.178573 | orchestrator | Thursday 19 February 2026 06:26:46 +0000 (0:00:01.278) 0:43:32.682 ***** 2026-02-19 06:26:55.178578 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:26:55.178584 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:26:55.178589 | 
orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:26:55.178601 | orchestrator | 2026-02-19 06:26:55.178606 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-19 06:26:55.178612 | orchestrator | Thursday 19 February 2026 06:26:50 +0000 (0:00:04.238) 0:43:36.921 ***** 2026-02-19 06:26:55.178617 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-19 06:26:55.178623 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-19 06:26:55.178628 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-19 06:26:55.178634 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:26:55.178643 | orchestrator | 2026-02-19 06:26:55.178652 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-19 06:26:55.178660 | orchestrator | Thursday 19 February 2026 06:26:52 +0000 (0:00:01.732) 0:43:38.653 ***** 2026-02-19 06:26:55.178672 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-19 06:26:55.178684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-19 06:26:55.178694 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-19 06:26:55.178702 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:26:55.178711 | orchestrator | 2026-02-19 
06:26:55.178720 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-19 06:26:55.178729 | orchestrator | Thursday 19 February 2026 06:26:54 +0000 (0:00:01.582) 0:43:40.235 ***** 2026-02-19 06:26:55.178747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:26:55.178769 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:13.626990 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:13.627096 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:13.627109 | orchestrator | 2026-02-19 06:27:13.627118 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-19 06:27:13.627127 | orchestrator | Thursday 19 February 2026 06:26:55 +0000 (0:00:01.153) 0:43:41.388 ***** 2026-02-19 06:27:13.627135 | orchestrator | 
ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 06:26:47.322623', 'end': '2026-02-19 06:26:48.379004', 'delta': '0:00:01.056381', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-19 06:27:13.627204 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 06:26:48.942305', 'end': '2026-02-19 06:26:48.996772', 'delta': '0:00:00.054467', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-19 06:27:13.627214 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '8bdbabe346bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 06:26:49.486322', 'end': '2026-02-19 06:26:49.537138', 'delta': '0:00:00.050816', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8bdbabe346bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-19 06:27:13.627221 | orchestrator | 2026-02-19 06:27:13.627228 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-19 06:27:13.627235 | orchestrator | Thursday 19 February 2026 06:26:56 +0000 (0:00:01.209) 0:43:42.598 ***** 2026-02-19 06:27:13.627242 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:27:13.627249 | orchestrator | 2026-02-19 06:27:13.627256 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-19 06:27:13.627262 | orchestrator | Thursday 19 February 2026 06:26:57 +0000 (0:00:01.240) 0:43:43.839 ***** 2026-02-19 06:27:13.627269 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:13.627276 | orchestrator | 2026-02-19 06:27:13.627283 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-19 06:27:13.627289 | orchestrator | Thursday 19 February 2026 06:26:58 +0000 (0:00:01.254) 0:43:45.093 ***** 2026-02-19 06:27:13.627296 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:27:13.627303 | orchestrator | 2026-02-19 06:27:13.627322 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-19 06:27:13.627329 | orchestrator | Thursday 19 February 2026 06:26:59 +0000 (0:00:01.132) 0:43:46.225 ***** 2026-02-19 06:27:13.627336 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:27:13.627342 | orchestrator | 2026-02-19 06:27:13.627349 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 06:27:13.627355 | orchestrator | 
Thursday 19 February 2026 06:27:01 +0000 (0:00:01.999) 0:43:48.225 ***** 2026-02-19 06:27:13.627362 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:27:13.627369 | orchestrator | 2026-02-19 06:27:13.627376 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-19 06:27:13.627382 | orchestrator | Thursday 19 February 2026 06:27:03 +0000 (0:00:01.108) 0:43:49.334 ***** 2026-02-19 06:27:13.627403 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:13.627426 | orchestrator | 2026-02-19 06:27:13.627433 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-19 06:27:13.627440 | orchestrator | Thursday 19 February 2026 06:27:04 +0000 (0:00:01.099) 0:43:50.433 ***** 2026-02-19 06:27:13.627453 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:13.627460 | orchestrator | 2026-02-19 06:27:13.627467 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 06:27:13.627474 | orchestrator | Thursday 19 February 2026 06:27:05 +0000 (0:00:01.219) 0:43:51.652 ***** 2026-02-19 06:27:13.627481 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:13.627488 | orchestrator | 2026-02-19 06:27:13.627494 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-19 06:27:13.627501 | orchestrator | Thursday 19 February 2026 06:27:06 +0000 (0:00:01.110) 0:43:52.763 ***** 2026-02-19 06:27:13.627508 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:13.627514 | orchestrator | 2026-02-19 06:27:13.627521 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-19 06:27:13.627529 | orchestrator | Thursday 19 February 2026 06:27:07 +0000 (0:00:01.116) 0:43:53.879 ***** 2026-02-19 06:27:13.627537 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:27:13.627545 | orchestrator | 2026-02-19 06:27:13.627552 | 
orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-19 06:27:13.627560 | orchestrator | Thursday 19 February 2026 06:27:08 +0000 (0:00:01.200) 0:43:55.080 ***** 2026-02-19 06:27:13.627568 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:13.627575 | orchestrator | 2026-02-19 06:27:13.627583 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 06:27:13.627590 | orchestrator | Thursday 19 February 2026 06:27:09 +0000 (0:00:01.119) 0:43:56.199 ***** 2026-02-19 06:27:13.627597 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:27:13.627605 | orchestrator | 2026-02-19 06:27:13.627612 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 06:27:13.627620 | orchestrator | Thursday 19 February 2026 06:27:11 +0000 (0:00:01.194) 0:43:57.394 ***** 2026-02-19 06:27:13.627628 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:13.627635 | orchestrator | 2026-02-19 06:27:13.627643 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 06:27:13.627651 | orchestrator | Thursday 19 February 2026 06:27:12 +0000 (0:00:01.094) 0:43:58.488 ***** 2026-02-19 06:27:13.627659 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:27:13.627667 | orchestrator | 2026-02-19 06:27:13.627679 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 06:27:13.627691 | orchestrator | Thursday 19 February 2026 06:27:13 +0000 (0:00:01.144) 0:43:59.633 ***** 2026-02-19 06:27:13.627702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:27:13.627713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456', 'dm-uuid-LVM-gHzkzoT6x1EhckfA8WsFQCGWNshTerqrXG1Ajk5mh4ejOwZYq1z2HQZKbcxUaUg2'], 'uuids': ['ca7295e3-b0e7-43de-a68b-3daf29557592'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2']}})  2026-02-19 06:27:13.627735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b', 'scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '74afed04', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 06:27:13.627767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6O260y-bve9-uiSU-QHAy-uS14-SBn4-tvFUE4', 'scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42', 'scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210']}})  2026-02-19 06:27:14.746123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:27:14.746260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:27:14.746279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-22-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 06:27:14.746294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:27:14.746306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM', 'dm-uuid-CRYPT-LUKS2-0386b2e9039d452a9d925bb7d9e8a516-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:27:14.746318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:27:14.746369 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210', 'dm-uuid-LVM-UIbdS0VVHImCuypuIpNFpiSdvep5TRFy7pgtKei4H9zcQ1O9SOgtegap7Wmtw1fM'], 'uuids': ['0386b2e9-039d-452a-9d92-5bb7d9e8a516'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM']}})  2026-02-19 06:27:14.746401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-82yKcB-Ey0W-COBu-ydNY-Ko6v-AgZ3-OegvdJ', 'scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46', 'scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456']}})  2026-02-19 06:27:14.746414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:27:14.746429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b283ac38', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 06:27:14.746452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:27:14.746534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:27:14.746570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2', 'dm-uuid-CRYPT-LUKS2-ca7295e3b0e743dea68b3daf29557592-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:27:14.953980 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:14.954215 | orchestrator | 2026-02-19 06:27:14.954237 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 06:27:14.954250 | orchestrator | Thursday 19 February 2026 06:27:14 +0000 (0:00:01.330) 0:44:00.964 ***** 2026-02-19 06:27:14.954265 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:14.954282 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456', 'dm-uuid-LVM-gHzkzoT6x1EhckfA8WsFQCGWNshTerqrXG1Ajk5mh4ejOwZYq1z2HQZKbcxUaUg2'], 'uuids': ['ca7295e3-b0e7-43de-a68b-3daf29557592'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:14.954297 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b', 'scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '74afed04', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:14.954348 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6O260y-bve9-uiSU-QHAy-uS14-SBn4-tvFUE4', 'scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42', 'scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:14.954384 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:14.954428 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:14.954442 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-22-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:14.954454 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:14.954466 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM', 'dm-uuid-CRYPT-LUKS2-0386b2e9039d452a9d925bb7d9e8a516-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:14.954495 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:14.954517 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210', 'dm-uuid-LVM-UIbdS0VVHImCuypuIpNFpiSdvep5TRFy7pgtKei4H9zcQ1O9SOgtegap7Wmtw1fM'], 'uuids': ['0386b2e9-039d-452a-9d92-5bb7d9e8a516'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:27.922513 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-82yKcB-Ey0W-COBu-ydNY-Ko6v-AgZ3-OegvdJ', 'scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46', 'scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:27.922611 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:27.922635 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b283ac38', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:27.922670 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:27.922678 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:27.922685 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2', 'dm-uuid-CRYPT-LUKS2-ca7295e3b0e743dea68b3daf29557592-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:27:27.922698 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:27.922705 | orchestrator | 2026-02-19 06:27:27.922713 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 06:27:27.922720 | orchestrator | Thursday 19 February 2026 06:27:16 +0000 (0:00:01.380) 0:44:02.345 ***** 2026-02-19 06:27:27.922726 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:27:27.922732 | orchestrator | 2026-02-19 06:27:27.922738 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 06:27:27.922744 | orchestrator | Thursday 19 February 2026 06:27:17 +0000 (0:00:01.437) 0:44:03.782 ***** 2026-02-19 06:27:27.922750 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:27:27.922756 | orchestrator | 2026-02-19 06:27:27.922761 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:27:27.922767 | orchestrator | Thursday 19 February 2026 06:27:18 +0000 (0:00:01.142) 0:44:04.924 ***** 2026-02-19 06:27:27.922773 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:27:27.922779 | orchestrator | 2026-02-19 06:27:27.922784 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:27:27.922790 | orchestrator | Thursday 19 February 2026 06:27:20 +0000 (0:00:01.470) 0:44:06.395 ***** 2026-02-19 06:27:27.922796 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:27.922802 | orchestrator | 2026-02-19 06:27:27.922808 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:27:27.922815 | orchestrator | Thursday 19 February 2026 06:27:21 +0000 (0:00:01.104) 0:44:07.500 ***** 2026-02-19 06:27:27.922824 | orchestrator | skipping: [testbed-node-5] 2026-02-19 
06:27:27.922833 | orchestrator | 2026-02-19 06:27:27.922843 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:27:27.922857 | orchestrator | Thursday 19 February 2026 06:27:22 +0000 (0:00:01.228) 0:44:08.728 ***** 2026-02-19 06:27:27.922868 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:27.922877 | orchestrator | 2026-02-19 06:27:27.922888 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 06:27:27.922898 | orchestrator | Thursday 19 February 2026 06:27:23 +0000 (0:00:01.121) 0:44:09.850 ***** 2026-02-19 06:27:27.922908 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-19 06:27:27.922919 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-19 06:27:27.922930 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-19 06:27:27.922939 | orchestrator | 2026-02-19 06:27:27.922949 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 06:27:27.922958 | orchestrator | Thursday 19 February 2026 06:27:25 +0000 (0:00:01.982) 0:44:11.833 ***** 2026-02-19 06:27:27.922969 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-19 06:27:27.922979 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-19 06:27:27.922990 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-19 06:27:27.923000 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:27:27.923011 | orchestrator | 2026-02-19 06:27:27.923017 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 06:27:27.923023 | orchestrator | Thursday 19 February 2026 06:27:26 +0000 (0:00:01.193) 0:44:13.027 ***** 2026-02-19 06:27:27.923029 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-19 06:27:27.923036 | 
orchestrator | 2026-02-19 06:27:27.923049 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-19 06:28:10.197035 | orchestrator | Thursday 19 February 2026 06:27:27 +0000 (0:00:01.108) 0:44:14.136 ***** 2026-02-19 06:28:10.197259 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.197278 | orchestrator | 2026-02-19 06:28:10.197291 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-19 06:28:10.197325 | orchestrator | Thursday 19 February 2026 06:27:29 +0000 (0:00:01.110) 0:44:15.247 ***** 2026-02-19 06:28:10.197337 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.197348 | orchestrator | 2026-02-19 06:28:10.197359 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-19 06:28:10.197370 | orchestrator | Thursday 19 February 2026 06:27:30 +0000 (0:00:01.132) 0:44:16.379 ***** 2026-02-19 06:28:10.197381 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.197392 | orchestrator | 2026-02-19 06:28:10.197402 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-19 06:28:10.197413 | orchestrator | Thursday 19 February 2026 06:27:31 +0000 (0:00:01.155) 0:44:17.534 ***** 2026-02-19 06:28:10.197424 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.197436 | orchestrator | 2026-02-19 06:28:10.197446 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-19 06:28:10.197457 | orchestrator | Thursday 19 February 2026 06:27:32 +0000 (0:00:01.200) 0:44:18.735 ***** 2026-02-19 06:28:10.197468 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-19 06:28:10.197479 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-19 06:28:10.197489 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-19 06:28:10.197500 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.197510 | orchestrator | 2026-02-19 06:28:10.197521 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-19 06:28:10.197532 | orchestrator | Thursday 19 February 2026 06:27:33 +0000 (0:00:01.419) 0:44:20.154 ***** 2026-02-19 06:28:10.197543 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-19 06:28:10.197554 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-19 06:28:10.197565 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-19 06:28:10.197577 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.197589 | orchestrator | 2026-02-19 06:28:10.197602 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 06:28:10.197614 | orchestrator | Thursday 19 February 2026 06:27:35 +0000 (0:00:01.419) 0:44:21.574 ***** 2026-02-19 06:28:10.197625 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-19 06:28:10.197638 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-19 06:28:10.197650 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-19 06:28:10.197662 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.197673 | orchestrator | 2026-02-19 06:28:10.197685 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 06:28:10.197697 | orchestrator | Thursday 19 February 2026 06:27:36 +0000 (0:00:01.411) 0:44:22.985 ***** 2026-02-19 06:28:10.197709 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.197722 | orchestrator | 2026-02-19 06:28:10.197734 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-19 06:28:10.197746 | orchestrator | Thursday 19 February 2026 06:27:37 +0000 
(0:00:01.136) 0:44:24.122 ***** 2026-02-19 06:28:10.197758 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-19 06:28:10.197770 | orchestrator | 2026-02-19 06:28:10.197781 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-19 06:28:10.197793 | orchestrator | Thursday 19 February 2026 06:27:39 +0000 (0:00:01.621) 0:44:25.743 ***** 2026-02-19 06:28:10.197805 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:28:10.197818 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:28:10.197830 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:28:10.197842 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 06:28:10.197868 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 06:28:10.197937 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-19 06:28:10.197950 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 06:28:10.197963 | orchestrator | 2026-02-19 06:28:10.197974 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-19 06:28:10.197985 | orchestrator | Thursday 19 February 2026 06:27:41 +0000 (0:00:02.143) 0:44:27.887 ***** 2026-02-19 06:28:10.197996 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:28:10.198007 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:28:10.198112 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:28:10.198133 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-19 06:28:10.198154 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-19 06:28:10.198174 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-19 06:28:10.198193 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 06:28:10.198208 | orchestrator | 2026-02-19 06:28:10.198219 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-19 06:28:10.198230 | orchestrator | Thursday 19 February 2026 06:27:44 +0000 (0:00:02.476) 0:44:30.363 ***** 2026-02-19 06:28:10.198241 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.198251 | orchestrator | 2026-02-19 06:28:10.198262 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-19 06:28:10.198292 | orchestrator | Thursday 19 February 2026 06:27:45 +0000 (0:00:01.137) 0:44:31.500 ***** 2026-02-19 06:28:10.198303 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.198314 | orchestrator | 2026-02-19 06:28:10.198325 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-19 06:28:10.198335 | orchestrator | Thursday 19 February 2026 06:27:46 +0000 (0:00:00.778) 0:44:32.279 ***** 2026-02-19 06:28:10.198346 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.198357 | orchestrator | 2026-02-19 06:28:10.198367 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-19 06:28:10.198378 | orchestrator | Thursday 19 February 2026 06:27:46 +0000 (0:00:00.848) 0:44:33.127 ***** 2026-02-19 06:28:10.198389 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-02-19 06:28:10.198400 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-19 06:28:10.198411 | orchestrator | 2026-02-19 06:28:10.198421 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-02-19 06:28:10.198432 | orchestrator | Thursday 19 February 2026 06:27:50 +0000 (0:00:03.843) 0:44:36.970 ***** 2026-02-19 06:28:10.198442 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-02-19 06:28:10.198454 | orchestrator | 2026-02-19 06:28:10.198465 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-19 06:28:10.198475 | orchestrator | Thursday 19 February 2026 06:27:51 +0000 (0:00:01.148) 0:44:38.119 ***** 2026-02-19 06:28:10.198486 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-02-19 06:28:10.198497 | orchestrator | 2026-02-19 06:28:10.198507 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-19 06:28:10.198518 | orchestrator | Thursday 19 February 2026 06:27:53 +0000 (0:00:01.119) 0:44:39.239 ***** 2026-02-19 06:28:10.198529 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.198539 | orchestrator | 2026-02-19 06:28:10.198550 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-19 06:28:10.198561 | orchestrator | Thursday 19 February 2026 06:27:54 +0000 (0:00:01.090) 0:44:40.329 ***** 2026-02-19 06:28:10.198571 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.198582 | orchestrator | 2026-02-19 06:28:10.198593 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-19 06:28:10.198615 | orchestrator | Thursday 19 February 2026 06:27:55 +0000 (0:00:01.511) 0:44:41.840 ***** 2026-02-19 06:28:10.198626 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.198637 | orchestrator | 2026-02-19 06:28:10.198648 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-19 06:28:10.198658 | orchestrator | 
Thursday 19 February 2026 06:27:57 +0000 (0:00:01.550) 0:44:43.391 ***** 2026-02-19 06:28:10.198669 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.198680 | orchestrator | 2026-02-19 06:28:10.198690 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-19 06:28:10.198701 | orchestrator | Thursday 19 February 2026 06:27:59 +0000 (0:00:01.907) 0:44:45.299 ***** 2026-02-19 06:28:10.198711 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.198722 | orchestrator | 2026-02-19 06:28:10.198733 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-19 06:28:10.198744 | orchestrator | Thursday 19 February 2026 06:28:00 +0000 (0:00:01.107) 0:44:46.407 ***** 2026-02-19 06:28:10.198754 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.198765 | orchestrator | 2026-02-19 06:28:10.198775 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-19 06:28:10.198786 | orchestrator | Thursday 19 February 2026 06:28:01 +0000 (0:00:01.137) 0:44:47.545 ***** 2026-02-19 06:28:10.198797 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.198807 | orchestrator | 2026-02-19 06:28:10.198818 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-19 06:28:10.198828 | orchestrator | Thursday 19 February 2026 06:28:02 +0000 (0:00:01.137) 0:44:48.683 ***** 2026-02-19 06:28:10.198839 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.198850 | orchestrator | 2026-02-19 06:28:10.198860 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-19 06:28:10.198871 | orchestrator | Thursday 19 February 2026 06:28:04 +0000 (0:00:01.550) 0:44:50.233 ***** 2026-02-19 06:28:10.198888 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.198900 | orchestrator | 2026-02-19 06:28:10.198910 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-19 06:28:10.198921 | orchestrator | Thursday 19 February 2026 06:28:05 +0000 (0:00:01.516) 0:44:51.749 ***** 2026-02-19 06:28:10.198931 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.198942 | orchestrator | 2026-02-19 06:28:10.198952 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-19 06:28:10.198963 | orchestrator | Thursday 19 February 2026 06:28:06 +0000 (0:00:00.759) 0:44:52.509 ***** 2026-02-19 06:28:10.198974 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.198984 | orchestrator | 2026-02-19 06:28:10.198995 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-19 06:28:10.199005 | orchestrator | Thursday 19 February 2026 06:28:07 +0000 (0:00:00.763) 0:44:53.273 ***** 2026-02-19 06:28:10.199016 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.199027 | orchestrator | 2026-02-19 06:28:10.199037 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-19 06:28:10.199048 | orchestrator | Thursday 19 February 2026 06:28:07 +0000 (0:00:00.771) 0:44:54.045 ***** 2026-02-19 06:28:10.199059 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.199117 | orchestrator | 2026-02-19 06:28:10.199129 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-19 06:28:10.199140 | orchestrator | Thursday 19 February 2026 06:28:08 +0000 (0:00:00.792) 0:44:54.837 ***** 2026-02-19 06:28:10.199150 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:10.199161 | orchestrator | 2026-02-19 06:28:10.199172 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-19 06:28:10.199183 | orchestrator | Thursday 19 February 2026 06:28:09 +0000 (0:00:00.772) 0:44:55.609 ***** 2026-02-19 06:28:10.199194 | 
orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:10.199205 | orchestrator | 2026-02-19 06:28:10.199224 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-19 06:28:49.994934 | orchestrator | Thursday 19 February 2026 06:28:10 +0000 (0:00:00.799) 0:44:56.408 ***** 2026-02-19 06:28:49.995081 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995097 | orchestrator | 2026-02-19 06:28:49.995108 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-19 06:28:49.995118 | orchestrator | Thursday 19 February 2026 06:28:10 +0000 (0:00:00.788) 0:44:57.197 ***** 2026-02-19 06:28:49.995126 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995135 | orchestrator | 2026-02-19 06:28:49.995144 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-19 06:28:49.995153 | orchestrator | Thursday 19 February 2026 06:28:11 +0000 (0:00:00.773) 0:44:57.971 ***** 2026-02-19 06:28:49.995162 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:49.995172 | orchestrator | 2026-02-19 06:28:49.995181 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-19 06:28:49.995189 | orchestrator | Thursday 19 February 2026 06:28:12 +0000 (0:00:00.781) 0:44:58.752 ***** 2026-02-19 06:28:49.995198 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:49.995210 | orchestrator | 2026-02-19 06:28:49.995225 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-19 06:28:49.995240 | orchestrator | Thursday 19 February 2026 06:28:13 +0000 (0:00:00.806) 0:44:59.559 ***** 2026-02-19 06:28:49.995255 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995268 | orchestrator | 2026-02-19 06:28:49.995282 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-19 
06:28:49.995296 | orchestrator | Thursday 19 February 2026 06:28:14 +0000 (0:00:00.754) 0:45:00.313 ***** 2026-02-19 06:28:49.995310 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995325 | orchestrator | 2026-02-19 06:28:49.995339 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-19 06:28:49.995353 | orchestrator | Thursday 19 February 2026 06:28:14 +0000 (0:00:00.759) 0:45:01.073 ***** 2026-02-19 06:28:49.995367 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995381 | orchestrator | 2026-02-19 06:28:49.995396 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-19 06:28:49.995410 | orchestrator | Thursday 19 February 2026 06:28:15 +0000 (0:00:00.777) 0:45:01.850 ***** 2026-02-19 06:28:49.995425 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995440 | orchestrator | 2026-02-19 06:28:49.995455 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-19 06:28:49.995465 | orchestrator | Thursday 19 February 2026 06:28:16 +0000 (0:00:00.780) 0:45:02.631 ***** 2026-02-19 06:28:49.995474 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995485 | orchestrator | 2026-02-19 06:28:49.995495 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-19 06:28:49.995505 | orchestrator | Thursday 19 February 2026 06:28:17 +0000 (0:00:00.744) 0:45:03.376 ***** 2026-02-19 06:28:49.995515 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995525 | orchestrator | 2026-02-19 06:28:49.995536 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-19 06:28:49.995547 | orchestrator | Thursday 19 February 2026 06:28:17 +0000 (0:00:00.770) 0:45:04.146 ***** 2026-02-19 06:28:49.995556 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995567 | 
orchestrator | 2026-02-19 06:28:49.995577 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-19 06:28:49.995587 | orchestrator | Thursday 19 February 2026 06:28:18 +0000 (0:00:00.755) 0:45:04.902 ***** 2026-02-19 06:28:49.995619 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995628 | orchestrator | 2026-02-19 06:28:49.995636 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-19 06:28:49.995645 | orchestrator | Thursday 19 February 2026 06:28:19 +0000 (0:00:00.763) 0:45:05.665 ***** 2026-02-19 06:28:49.995654 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995662 | orchestrator | 2026-02-19 06:28:49.995671 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-19 06:28:49.995704 | orchestrator | Thursday 19 February 2026 06:28:20 +0000 (0:00:00.796) 0:45:06.462 ***** 2026-02-19 06:28:49.995713 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995722 | orchestrator | 2026-02-19 06:28:49.995744 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-19 06:28:49.995753 | orchestrator | Thursday 19 February 2026 06:28:21 +0000 (0:00:00.813) 0:45:07.275 ***** 2026-02-19 06:28:49.995762 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995770 | orchestrator | 2026-02-19 06:28:49.995779 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-19 06:28:49.995787 | orchestrator | Thursday 19 February 2026 06:28:21 +0000 (0:00:00.762) 0:45:08.038 ***** 2026-02-19 06:28:49.995796 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995804 | orchestrator | 2026-02-19 06:28:49.995813 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-19 06:28:49.995822 | orchestrator | Thursday 19 
February 2026 06:28:22 +0000 (0:00:00.777) 0:45:08.816 ***** 2026-02-19 06:28:49.995830 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:49.995839 | orchestrator | 2026-02-19 06:28:49.995848 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-19 06:28:49.995856 | orchestrator | Thursday 19 February 2026 06:28:24 +0000 (0:00:01.649) 0:45:10.466 ***** 2026-02-19 06:28:49.995865 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:49.995874 | orchestrator | 2026-02-19 06:28:49.995882 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-19 06:28:49.995891 | orchestrator | Thursday 19 February 2026 06:28:26 +0000 (0:00:01.909) 0:45:12.377 ***** 2026-02-19 06:28:49.995900 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-02-19 06:28:49.995910 | orchestrator | 2026-02-19 06:28:49.995919 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-19 06:28:49.995930 | orchestrator | Thursday 19 February 2026 06:28:27 +0000 (0:00:01.088) 0:45:13.465 ***** 2026-02-19 06:28:49.995945 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.995959 | orchestrator | 2026-02-19 06:28:49.995974 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-19 06:28:49.996010 | orchestrator | Thursday 19 February 2026 06:28:28 +0000 (0:00:01.115) 0:45:14.580 ***** 2026-02-19 06:28:49.996046 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.996060 | orchestrator | 2026-02-19 06:28:49.996074 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-19 06:28:49.996089 | orchestrator | Thursday 19 February 2026 06:28:29 +0000 (0:00:01.115) 0:45:15.696 ***** 2026-02-19 06:28:49.996104 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-19 06:28:49.996118 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-19 06:28:49.996133 | orchestrator | 2026-02-19 06:28:49.996147 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-19 06:28:49.996166 | orchestrator | Thursday 19 February 2026 06:28:31 +0000 (0:00:01.802) 0:45:17.499 ***** 2026-02-19 06:28:49.996185 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:28:49.996198 | orchestrator | 2026-02-19 06:28:49.996211 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-19 06:28:49.996224 | orchestrator | Thursday 19 February 2026 06:28:32 +0000 (0:00:01.416) 0:45:18.916 ***** 2026-02-19 06:28:49.996238 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.996251 | orchestrator | 2026-02-19 06:28:49.996265 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-19 06:28:49.996278 | orchestrator | Thursday 19 February 2026 06:28:33 +0000 (0:00:01.121) 0:45:20.037 ***** 2026-02-19 06:28:49.996292 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.996307 | orchestrator | 2026-02-19 06:28:49.996319 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-19 06:28:49.996347 | orchestrator | Thursday 19 February 2026 06:28:34 +0000 (0:00:00.792) 0:45:20.829 ***** 2026-02-19 06:28:49.996362 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:28:49.996377 | orchestrator | 2026-02-19 06:28:49.996392 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-19 06:28:49.996404 | orchestrator | Thursday 19 February 2026 06:28:35 +0000 (0:00:00.776) 0:45:21.606 ***** 2026-02-19 06:28:49.996413 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5
2026-02-19 06:28:49.996421 | orchestrator |
2026-02-19 06:28:49.996430 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-19 06:28:49.996438 | orchestrator | Thursday 19 February 2026 06:28:36 +0000 (0:00:01.186) 0:45:22.793 *****
2026-02-19 06:28:49.996447 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:28:49.996456 | orchestrator |
2026-02-19 06:28:49.996464 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-19 06:28:49.996473 | orchestrator | Thursday 19 February 2026 06:28:38 +0000 (0:00:01.743) 0:45:24.537 *****
2026-02-19 06:28:49.996481 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-19 06:28:49.996490 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-19 06:28:49.996499 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-19 06:28:49.996507 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:28:49.996516 | orchestrator |
2026-02-19 06:28:49.996525 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-19 06:28:49.996533 | orchestrator | Thursday 19 February 2026 06:28:39 +0000 (0:00:01.120) 0:45:25.657 *****
2026-02-19 06:28:49.996542 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:28:49.996550 | orchestrator |
2026-02-19 06:28:49.996559 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-19 06:28:49.996567 | orchestrator | Thursday 19 February 2026 06:28:40 +0000 (0:00:01.136) 0:45:26.794 *****
2026-02-19 06:28:49.996576 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:28:49.996584 | orchestrator |
2026-02-19 06:28:49.996593 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-19 06:28:49.996602 | orchestrator | Thursday 19 February 2026 06:28:41 +0000 (0:00:01.131) 0:45:27.925 *****
2026-02-19 06:28:49.996610 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:28:49.996619 | orchestrator |
2026-02-19 06:28:49.996635 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-19 06:28:49.996650 | orchestrator | Thursday 19 February 2026 06:28:42 +0000 (0:00:01.099) 0:45:29.024 *****
2026-02-19 06:28:49.996664 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:28:49.996677 | orchestrator |
2026-02-19 06:28:49.996690 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-19 06:28:49.996704 | orchestrator | Thursday 19 February 2026 06:28:43 +0000 (0:00:01.109) 0:45:30.134 *****
2026-02-19 06:28:49.996716 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:28:49.996729 | orchestrator |
2026-02-19 06:28:49.996743 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-19 06:28:49.996756 | orchestrator | Thursday 19 February 2026 06:28:44 +0000 (0:00:00.781) 0:45:30.916 *****
2026-02-19 06:28:49.996769 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:28:49.996783 | orchestrator |
2026-02-19 06:28:49.996797 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-19 06:28:49.996812 | orchestrator | Thursday 19 February 2026 06:28:46 +0000 (0:00:02.199) 0:45:33.116 *****
2026-02-19 06:28:49.996826 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:28:49.996842 | orchestrator |
2026-02-19 06:28:49.996857 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-19 06:28:49.996872 | orchestrator | Thursday 19 February 2026 06:28:47 +0000 (0:00:00.777) 0:45:33.894 *****
2026-02-19 06:28:49.996886 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-19 06:28:49.996912 | orchestrator |
2026-02-19 06:28:49.996926 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-19 06:28:49.996941 | orchestrator | Thursday 19 February 2026 06:28:48 +0000 (0:00:01.193) 0:45:35.088 *****
2026-02-19 06:28:49.996956 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:28:49.996972 | orchestrator |
2026-02-19 06:28:49.996988 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-19 06:28:49.997059 | orchestrator | Thursday 19 February 2026 06:28:49 +0000 (0:00:01.121) 0:45:36.209 *****
2026-02-19 06:29:34.028582 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.028700 | orchestrator |
2026-02-19 06:29:34.028718 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-19 06:29:34.028731 | orchestrator | Thursday 19 February 2026 06:28:51 +0000 (0:00:01.107) 0:45:37.317 *****
2026-02-19 06:29:34.028742 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.028754 | orchestrator |
2026-02-19 06:29:34.028765 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-19 06:29:34.028776 | orchestrator | Thursday 19 February 2026 06:28:52 +0000 (0:00:01.109) 0:45:38.426 *****
2026-02-19 06:29:34.028787 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.028798 | orchestrator |
2026-02-19 06:29:34.028809 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-19 06:29:34.028819 | orchestrator | Thursday 19 February 2026 06:28:53 +0000 (0:00:01.121) 0:45:39.547 *****
2026-02-19 06:29:34.028830 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.028841 | orchestrator |
2026-02-19 06:29:34.028852 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-19 06:29:34.028863 | orchestrator | Thursday 19 February 2026 06:28:54 +0000 (0:00:01.115) 0:45:40.662 *****
2026-02-19 06:29:34.028873 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.028884 | orchestrator |
2026-02-19 06:29:34.028895 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-19 06:29:34.028906 | orchestrator | Thursday 19 February 2026 06:28:55 +0000 (0:00:01.128) 0:45:41.791 *****
2026-02-19 06:29:34.028916 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.028927 | orchestrator |
2026-02-19 06:29:34.028942 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-19 06:29:34.029024 | orchestrator | Thursday 19 February 2026 06:28:56 +0000 (0:00:01.162) 0:45:42.953 *****
2026-02-19 06:29:34.029048 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.029067 | orchestrator |
2026-02-19 06:29:34.029086 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-19 06:29:34.029097 | orchestrator | Thursday 19 February 2026 06:28:57 +0000 (0:00:01.124) 0:45:44.078 *****
2026-02-19 06:29:34.029108 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:29:34.029120 | orchestrator |
2026-02-19 06:29:34.029132 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-19 06:29:34.029145 | orchestrator | Thursday 19 February 2026 06:28:58 +0000 (0:00:00.794) 0:45:44.872 *****
2026-02-19 06:29:34.029157 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-19 06:29:34.029171 | orchestrator |
2026-02-19 06:29:34.029183 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-19 06:29:34.029195 | orchestrator | Thursday 19 February 2026 06:28:59 +0000 (0:00:01.110) 0:45:45.983 *****
2026-02-19 06:29:34.029208 | orchestrator | ok: [testbed-node-5] =>
(item=/etc/ceph)
2026-02-19 06:29:34.029221 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-19 06:29:34.029233 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-19 06:29:34.029247 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-19 06:29:34.029260 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-19 06:29:34.029272 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-19 06:29:34.029285 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-19 06:29:34.029326 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-19 06:29:34.029339 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-19 06:29:34.029352 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-19 06:29:34.029365 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-19 06:29:34.029377 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-19 06:29:34.029389 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-19 06:29:34.029416 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 06:29:34.029429 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-19 06:29:34.029442 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-19 06:29:34.029455 | orchestrator |
2026-02-19 06:29:34.029467 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-19 06:29:34.029480 | orchestrator | Thursday 19 February 2026 06:29:06 +0000 (0:00:06.444) 0:45:52.427 *****
2026-02-19 06:29:34.029492 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-19 06:29:34.029506 | orchestrator |
2026-02-19 06:29:34.029519 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-19 06:29:34.029529 | orchestrator | Thursday 19 February 2026 06:29:07 +0000 (0:00:01.124) 0:45:53.551 *****
2026-02-19 06:29:34.029540 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 06:29:34.029553 | orchestrator |
2026-02-19 06:29:34.029563 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-19 06:29:34.029574 | orchestrator | Thursday 19 February 2026 06:29:08 +0000 (0:00:01.501) 0:45:55.053 *****
2026-02-19 06:29:34.029585 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 06:29:34.029595 | orchestrator |
2026-02-19 06:29:34.029606 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-19 06:29:34.029617 | orchestrator | Thursday 19 February 2026 06:29:10 +0000 (0:00:01.627) 0:45:56.681 *****
2026-02-19 06:29:34.029627 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.029638 | orchestrator |
2026-02-19 06:29:34.029649 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 06:29:34.029678 | orchestrator | Thursday 19 February 2026 06:29:11 +0000 (0:00:00.784) 0:45:57.465 *****
2026-02-19 06:29:34.029690 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.029701 | orchestrator |
2026-02-19 06:29:34.029712 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 06:29:34.029722 | orchestrator | Thursday 19 February 2026 06:29:12 +0000 (0:00:00.774) 0:45:58.239 *****
2026-02-19 06:29:34.029733 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.029744 | orchestrator |
2026-02-19 06:29:34.029754 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 06:29:34.029765 | orchestrator | Thursday 19 February 2026 06:29:12 +0000 (0:00:00.757) 0:45:58.997 *****
2026-02-19 06:29:34.029776 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.029786 | orchestrator |
2026-02-19 06:29:34.029797 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 06:29:34.029808 | orchestrator | Thursday 19 February 2026 06:29:13 +0000 (0:00:00.751) 0:45:59.748 *****
2026-02-19 06:29:34.029818 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.029829 | orchestrator |
2026-02-19 06:29:34.029840 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 06:29:34.029851 | orchestrator | Thursday 19 February 2026 06:29:14 +0000 (0:00:00.789) 0:46:00.538 *****
2026-02-19 06:29:34.029861 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.029872 | orchestrator |
2026-02-19 06:29:34.029883 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 06:29:34.029903 | orchestrator | Thursday 19 February 2026 06:29:15 +0000 (0:00:00.778) 0:46:01.317 *****
2026-02-19 06:29:34.029914 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.029925 | orchestrator |
2026-02-19 06:29:34.029936 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 06:29:34.029947 | orchestrator | Thursday 19 February 2026 06:29:15 +0000 (0:00:00.816) 0:46:02.133 *****
2026-02-19 06:29:34.029957 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.030099 | orchestrator |
2026-02-19 06:29:34.030111 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 06:29:34.030122 | orchestrator | Thursday 19 February 2026 06:29:16 +0000 (0:00:00.812) 0:46:02.945 *****
2026-02-19 06:29:34.030133 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.030143 | orchestrator |
2026-02-19 06:29:34.030154 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 06:29:34.030165 | orchestrator | Thursday 19 February 2026 06:29:17 +0000 (0:00:00.784) 0:46:03.730 *****
2026-02-19 06:29:34.030175 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.030186 | orchestrator |
2026-02-19 06:29:34.030197 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 06:29:34.030207 | orchestrator | Thursday 19 February 2026 06:29:18 +0000 (0:00:00.762) 0:46:04.493 *****
2026-02-19 06:29:34.030218 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:29:34.030229 | orchestrator |
2026-02-19 06:29:34.030240 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 06:29:34.030250 | orchestrator | Thursday 19 February 2026 06:29:19 +0000 (0:00:00.863) 0:46:05.356 *****
2026-02-19 06:29:34.030261 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-19 06:29:34.030272 | orchestrator |
2026-02-19 06:29:34.030282 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 06:29:34.030293 | orchestrator | Thursday 19 February 2026 06:29:23 +0000 (0:00:04.127) 0:46:09.484 *****
2026-02-19 06:29:34.030304 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 06:29:34.030314 | orchestrator |
2026-02-19 06:29:34.030325 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-19 06:29:34.030336 | orchestrator | Thursday 19 February 2026 06:29:24 +0000 (0:00:00.837) 0:46:10.321 *****
2026-02-19 06:29:34.030355 |
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-19 06:29:34.030371 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-19 06:29:34.030383 | orchestrator |
2026-02-19 06:29:34.030394 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-19 06:29:34.030405 | orchestrator | Thursday 19 February 2026 06:29:31 +0000 (0:00:07.557) 0:46:17.878 *****
2026-02-19 06:29:34.030415 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.030426 | orchestrator |
2026-02-19 06:29:34.030437 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-19 06:29:34.030447 | orchestrator | Thursday 19 February 2026 06:29:32 +0000 (0:00:00.771) 0:46:18.650 *****
2026-02-19 06:29:34.030458 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.030469 | orchestrator |
2026-02-19 06:29:34.030480 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:29:34.030500 | orchestrator | Thursday 19 February 2026 06:29:33 +0000 (0:00:00.785) 0:46:19.436 *****
2026-02-19 06:29:34.030511 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:29:34.030522 | orchestrator |
2026-02-19 06:29:34.030533 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:29:34.030552 | orchestrator | Thursday 19 February 2026 06:29:34 +0000 (0:00:00.806) 0:46:20.243 *****
2026-02-19 06:30:20.950152 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:30:20.950268 | orchestrator |
2026-02-19 06:30:20.950279 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:30:20.950288 | orchestrator | Thursday 19 February 2026 06:29:34 +0000 (0:00:00.791) 0:46:21.035 *****
2026-02-19 06:30:20.950296 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:30:20.950304 | orchestrator |
2026-02-19 06:30:20.950311 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:30:20.950318 | orchestrator | Thursday 19 February 2026 06:29:35 +0000 (0:00:00.801) 0:46:21.836 *****
2026-02-19 06:30:20.950326 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:30:20.950334 | orchestrator |
2026-02-19 06:30:20.950342 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:30:20.950349 | orchestrator | Thursday 19 February 2026 06:29:36 +0000 (0:00:00.870) 0:46:22.707 *****
2026-02-19 06:30:20.950357 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-19 06:30:20.950365 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-19 06:30:20.950372 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:30:20.950380 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:30:20.950387 | orchestrator |
2026-02-19 06:30:20.950394 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 06:30:20.950402 | orchestrator | Thursday 19 February 2026 06:29:37 +0000 (0:00:01.374) 0:46:24.081 *****
2026-02-19 06:30:20.950411 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-19 06:30:20.950418 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-19 06:30:20.950440 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:30:20.950449 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:30:20.950456 | orchestrator |
2026-02-19 06:30:20.950464 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 06:30:20.950471 | orchestrator | Thursday 19 February 2026 06:29:39 +0000 (0:00:01.378) 0:46:25.460 *****
2026-02-19 06:30:20.950479 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-19 06:30:20.950486 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-19 06:30:20.950493 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:30:20.950501 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:30:20.950508 | orchestrator |
2026-02-19 06:30:20.950516 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 06:30:20.950523 | orchestrator | Thursday 19 February 2026 06:29:40 +0000 (0:00:01.079) 0:46:26.540 *****
2026-02-19 06:30:20.950531 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:30:20.950539 | orchestrator |
2026-02-19 06:30:20.950547 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 06:30:20.950555 | orchestrator | Thursday 19 February 2026 06:29:41 +0000 (0:00:00.808) 0:46:27.348 *****
2026-02-19 06:30:20.950564 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-19 06:30:20.950572 | orchestrator |
2026-02-19 06:30:20.950580 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-19 06:30:20.950588 | orchestrator | Thursday 19 February 2026 06:29:42 +0000 (0:00:01.010) 0:46:28.359 *****
2026-02-19 06:30:20.950596 | orchestrator | changed: [testbed-node-5]
2026-02-19 06:30:20.950603 | orchestrator |
2026-02-19 06:30:20.950611 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-19 06:30:20.950618 | orchestrator | Thursday 19 February 2026 06:29:43 +0000 (0:00:01.454) 0:46:29.814 *****
2026-02-19 06:30:20.950649 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:30:20.950659 | orchestrator |
2026-02-19 06:30:20.950668 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-19 06:30:20.950676 | orchestrator | Thursday 19 February 2026 06:29:44 +0000 (0:00:00.791) 0:46:30.605 *****
2026-02-19 06:30:20.950685 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:30:20.950708 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:30:20.950717 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:30:20.950725 | orchestrator |
2026-02-19 06:30:20.950734 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-19 06:30:20.950742 | orchestrator | Thursday 19 February 2026 06:29:46 +0000 (0:00:01.630) 0:46:32.236 *****
2026-02-19 06:30:20.950751 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5
2026-02-19 06:30:20.950759 | orchestrator |
2026-02-19 06:30:20.950768 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-19 06:30:20.950775 | orchestrator | Thursday 19 February 2026 06:29:47 +0000 (0:00:01.104) 0:46:33.340 *****
2026-02-19 06:30:20.950782 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:30:20.950790 | orchestrator |
2026-02-19 06:30:20.950797 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-19 06:30:20.950804 | orchestrator | Thursday 19 February 2026 06:29:48 +0000 (0:00:01.125) 0:46:34.466 *****
2026-02-19 06:30:20.950812 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:30:20.950820 | orchestrator |
2026-02-19 06:30:20.950828 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-19 06:30:20.950836 | orchestrator | Thursday 19 February 2026 06:29:49 +0000 (0:00:01.160) 0:46:35.626 *****
2026-02-19 06:30:20.950845 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:30:20.950853 | orchestrator |
2026-02-19 06:30:20.950861 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-19 06:30:20.950869 | orchestrator | Thursday 19 February 2026 06:29:50 +0000 (0:00:01.491) 0:46:37.118 *****
2026-02-19 06:30:20.950877 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:30:20.950886 | orchestrator |
2026-02-19 06:30:20.950893 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-19 06:30:20.950901 | orchestrator | Thursday 19 February 2026 06:29:52 +0000 (0:00:01.525) 0:46:38.643 *****
2026-02-19 06:30:20.950981 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-19 06:30:20.950996 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-19 06:30:20.951006 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-19 06:30:20.951015 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-19 06:30:20.951024 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-19 06:30:20.951033 | orchestrator |
2026-02-19 06:30:20.951042 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-19 06:30:20.951051 | orchestrator | Thursday 19 February 2026 06:29:54 +0000 (0:00:02.560) 0:46:41.203 *****
2026-02-19
06:30:20.951061 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:30:20.951070 | orchestrator |
2026-02-19 06:30:20.951078 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-19 06:30:20.951086 | orchestrator | Thursday 19 February 2026 06:29:55 +0000 (0:00:00.795) 0:46:41.999 *****
2026-02-19 06:30:20.951094 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5
2026-02-19 06:30:20.951102 | orchestrator |
2026-02-19 06:30:20.951110 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-19 06:30:20.951117 | orchestrator | Thursday 19 February 2026 06:29:56 +0000 (0:00:01.135) 0:46:43.135 *****
2026-02-19 06:30:20.951137 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-19 06:30:20.951145 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-19 06:30:20.951153 | orchestrator |
2026-02-19 06:30:20.951161 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-19 06:30:20.951168 | orchestrator | Thursday 19 February 2026 06:29:58 +0000 (0:00:01.852) 0:46:44.988 *****
2026-02-19 06:30:20.951176 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 06:30:20.951184 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-19 06:30:20.951192 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-19 06:30:20.951199 | orchestrator |
2026-02-19 06:30:20.951207 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-19 06:30:20.951214 | orchestrator | Thursday 19 February 2026 06:30:02 +0000 (0:00:03.352) 0:46:48.341 *****
2026-02-19 06:30:20.951221 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-19 06:30:20.951229 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-19 06:30:20.951237 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:30:20.951244 | orchestrator |
2026-02-19 06:30:20.951252 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-19 06:30:20.951259 | orchestrator | Thursday 19 February 2026 06:30:03 +0000 (0:00:01.702) 0:46:50.044 *****
2026-02-19 06:30:20.951267 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:30:20.951275 | orchestrator |
2026-02-19 06:30:20.951282 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-19 06:30:20.951290 | orchestrator | Thursday 19 February 2026 06:30:04 +0000 (0:00:00.888) 0:46:50.932 *****
2026-02-19 06:30:20.951297 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:30:20.951305 | orchestrator |
2026-02-19 06:30:20.951312 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-19 06:30:20.951319 | orchestrator | Thursday 19 February 2026 06:30:05 +0000 (0:00:00.776) 0:46:51.708 *****
2026-02-19 06:30:20.951327 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:30:20.951334 | orchestrator |
2026-02-19 06:30:20.951342 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-19 06:30:20.951349 | orchestrator | Thursday 19 February 2026 06:30:06 +0000 (0:00:00.789) 0:46:52.498 *****
2026-02-19 06:30:20.951357 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5
2026-02-19 06:30:20.951364 | orchestrator |
2026-02-19 06:30:20.951380 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-19 06:30:20.951388 | orchestrator | Thursday 19 February 2026 06:30:07 +0000 (0:00:01.106) 0:46:53.605 *****
2026-02-19 06:30:20.951395 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:30:20.951402 | orchestrator |
2026-02-19 06:30:20.951410 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-19 06:30:20.951418 | orchestrator | Thursday 19 February 2026 06:30:08 +0000 (0:00:01.468) 0:46:55.073 *****
2026-02-19 06:30:20.951425 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:30:20.951432 | orchestrator |
2026-02-19 06:30:20.951440 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-19 06:30:20.951448 | orchestrator | Thursday 19 February 2026 06:30:12 +0000 (0:00:03.595) 0:46:58.669 *****
2026-02-19 06:30:20.951456 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5
2026-02-19 06:30:20.951463 | orchestrator |
2026-02-19 06:30:20.951470 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-19 06:30:20.951478 | orchestrator | Thursday 19 February 2026 06:30:13 +0000 (0:00:01.113) 0:46:59.782 *****
2026-02-19 06:30:20.951485 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:30:20.951493 | orchestrator |
2026-02-19 06:30:20.951502 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-19 06:30:20.951510 | orchestrator | Thursday 19 February 2026 06:30:15 +0000 (0:00:02.033) 0:47:01.815 *****
2026-02-19 06:30:20.951526 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:30:20.951535 | orchestrator |
2026-02-19 06:30:20.951543 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-19 06:30:20.951550 | orchestrator | Thursday 19 February 2026 06:30:17 +0000 (0:00:01.891) 0:47:03.707 *****
2026-02-19 06:30:20.951558 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:30:20.951565 | orchestrator |
2026-02-19 06:30:20.951573 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-19 06:30:20.951581 | orchestrator | Thursday 19 February 2026 06:30:19 +0000 (0:00:02.288) 0:47:05.996 *****
2026-02-19 06:30:20.951590 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:30:20.951598 | orchestrator |
2026-02-19 06:30:20.951619 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-19 06:32:41.075249 | orchestrator | Thursday 19 February 2026 06:30:20 +0000 (0:00:01.168) 0:47:07.165 *****
2026-02-19 06:32:41.075343 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:32:41.075355 | orchestrator |
2026-02-19 06:32:41.075363 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-19 06:32:41.075370 | orchestrator | Thursday 19 February 2026 06:30:22 +0000 (0:00:01.167) 0:47:08.333 *****
2026-02-19 06:32:41.075396 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-02-19 06:32:41.075403 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-02-19 06:32:41.075410 | orchestrator |
2026-02-19 06:32:41.075417 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-19 06:32:41.075423 | orchestrator | Thursday 19 February 2026 06:30:23 +0000 (0:00:01.855) 0:47:10.189 *****
2026-02-19 06:32:41.075430 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-02-19 06:32:41.075436 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-02-19 06:32:41.075442 | orchestrator |
2026-02-19 06:32:41.075449 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-19 06:32:41.075455 | orchestrator | Thursday 19 February 2026 06:30:26 +0000 (0:00:02.908) 0:47:13.097 *****
2026-02-19 06:32:41.075462 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-02-19 06:32:41.075469 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-19 06:32:41.075475 | orchestrator |
2026-02-19 06:32:41.075482 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-19 06:32:41.075489 | orchestrator | Thursday 19 February 2026 06:30:31 +0000 (0:00:04.357)
0:47:17.455 *****
2026-02-19 06:32:41.075500 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:32:41.075510 | orchestrator |
2026-02-19 06:32:41.075521 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-19 06:32:41.075532 | orchestrator | Thursday 19 February 2026 06:30:32 +0000 (0:00:00.856) 0:47:18.311 *****
2026-02-19 06:32:41.075542 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-19 06:32:41.075555 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-19 06:32:41.075565 | orchestrator |
2026-02-19 06:32:41.075576 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-19 06:32:41.075586 | orchestrator | Thursday 19 February 2026 06:30:45 +0000 (0:00:13.514) 0:47:31.825 *****
2026-02-19 06:32:41.075595 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:32:41.075604 | orchestrator |
2026-02-19 06:32:41.075616 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-02-19 06:32:41.075626 | orchestrator | Thursday 19 February 2026 06:30:46 +0000 (0:00:00.854) 0:47:32.681 *****
2026-02-19 06:32:41.075637 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:32:41.075647 | orchestrator |
2026-02-19 06:32:41.075659 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-02-19 06:32:41.075670 | orchestrator | Thursday 19 February 2026 06:30:47 +0000 (0:00:00.775) 0:47:33.456 *****
2026-02-19 06:32:41.075676 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:32:41.075683 | orchestrator |
2026-02-19 06:32:41.075689 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-02-19 06:32:41.075717 | orchestrator | Thursday 19 February 2026 06:30:47 +0000 (0:00:00.749) 0:47:34.205 *****
2026-02-19 06:32:41.075723 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-19 06:32:41.075730 | orchestrator |
2026-02-19 06:32:41.075736 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-19 06:32:41.075742 | orchestrator | Thursday 19 February 2026 06:30:49 +0000 (0:00:01.957) 0:47:36.163 *****
2026-02-19 06:32:41.075748 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:32:41.075754 | orchestrator |
2026-02-19 06:32:41.075761 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-19 06:32:41.075853 | orchestrator | Thursday 19 February 2026 06:30:50 +0000 (0:00:00.751) 0:47:36.914 *****
2026-02-19 06:32:41.075863 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:32:41.075871 | orchestrator |
2026-02-19 06:32:41.075879 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-19 06:32:41.075886 | orchestrator | Thursday 19 February 2026 06:30:51 +0000 (0:00:00.760) 0:47:37.675 *****
2026-02-19 06:32:41.075892 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:32:41.075899 | orchestrator |
2026-02-19 06:32:41.075905 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-19 06:32:41.075911 | orchestrator | Thursday 19 February 2026 06:30:52 +0000 (0:00:00.790) 0:47:38.465 *****
2026-02-19 06:32:41.075918 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:32:41.075924 | orchestrator |
2026-02-19 06:32:41.075930 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-19 06:32:41.075937 | orchestrator | Thursday 19 February 2026 06:30:52 +0000 (0:00:00.750) 0:47:39.216 *****
2026-02-19 06:32:41.075943 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:32:41.075949 | orchestrator |
2026-02-19 06:32:41.075956 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-19 06:32:41.075962 | orchestrator | Thursday 19 February 2026 06:30:53 +0000 (0:00:00.760) 0:47:39.977 *****
2026-02-19 06:32:41.075968 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:32:41.075975 | orchestrator |
2026-02-19 06:32:41.075981 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-19 06:32:41.075987 | orchestrator | Thursday 19 February 2026 06:30:54 +0000 (0:00:00.778) 0:47:40.755 *****
2026-02-19 06:32:41.075994 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:32:41.076000 | orchestrator |
2026-02-19 06:32:41.076006 | orchestrator | PLAY [Complete osd upgrade] ****************************************************
2026-02-19 06:32:41.076013 | orchestrator |
2026-02-19 06:32:41.076019 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-19 06:32:41.076025 | orchestrator | Thursday 19 February 2026 06:30:56 +0000 (0:00:01.743) 0:47:42.498 *****
2026-02-19 06:32:41.076032 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:32:41.076039 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:32:41.076045 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:32:41.076051 | orchestrator |
2026-02-19 06:32:41.076058 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-19 06:32:41.076082 | orchestrator | Thursday 19 February 2026 06:30:57 +0000 (0:00:01.628) 0:47:44.127 *****
2026-02-19 06:32:41.076089 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:32:41.076095 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:32:41.076101 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:32:41.076108 | orchestrator |
2026-02-19 06:32:41.076114 | orchestrator | TASK [Re-enable pg autoscale on pools] *****************************************
2026-02-19 06:32:41.076120 | orchestrator | Thursday 19 February 2026 06:30:59 +0000 (0:00:01.374) 0:47:45.501 *****
2026-02-19 06:32:41.076127 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-02-19 06:32:41.076133 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-02-19 06:32:41.076140 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-02-19 06:32:41.076153 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-02-19 06:32:41.076161 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-02-19 06:32:41.076168 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-02-19 06:32:41.076174 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-02-19 06:32:41.076181 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-02-19 06:32:41.076188 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-02-19 06:32:41.076195 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-02-19 06:32:41.076201 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-02-19 06:32:41.076208 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-02-19 06:32:41.076214 | orchestrator | skipping:
[testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-19 06:32:41.076221 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-19 06:32:41.076228 | orchestrator | 2026-02-19 06:32:41.076235 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-02-19 06:32:41.076241 | orchestrator | Thursday 19 February 2026 06:32:21 +0000 (0:01:22.024) 0:49:07.526 ***** 2026-02-19 06:32:41.076248 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-19 06:32:41.076254 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-19 06:32:41.076261 | orchestrator | 2026-02-19 06:32:41.076267 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-02-19 06:32:41.076274 | orchestrator | Thursday 19 February 2026 06:32:27 +0000 (0:00:06.563) 0:49:14.089 ***** 2026-02-19 06:32:41.076280 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:32:41.076286 | orchestrator | 2026-02-19 06:32:41.076291 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-02-19 06:32:41.076297 | orchestrator | 2026-02-19 06:32:41.076303 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 06:32:41.076312 | orchestrator | Thursday 19 February 2026 06:32:31 +0000 (0:00:03.456) 0:49:17.545 ***** 2026-02-19 06:32:41.076318 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-19 06:32:41.076324 | orchestrator | 2026-02-19 06:32:41.076330 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-19 06:32:41.076336 | orchestrator | Thursday 19 February 2026 06:32:32 +0000 (0:00:01.124) 0:49:18.670 ***** 2026-02-19 06:32:41.076341 | orchestrator | ok: 
[testbed-node-0] 2026-02-19 06:32:41.076347 | orchestrator | 2026-02-19 06:32:41.076353 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-19 06:32:41.076359 | orchestrator | Thursday 19 February 2026 06:32:33 +0000 (0:00:01.478) 0:49:20.149 ***** 2026-02-19 06:32:41.076364 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:32:41.076370 | orchestrator | 2026-02-19 06:32:41.076376 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 06:32:41.076382 | orchestrator | Thursday 19 February 2026 06:32:35 +0000 (0:00:01.118) 0:49:21.268 ***** 2026-02-19 06:32:41.076387 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:32:41.076393 | orchestrator | 2026-02-19 06:32:41.076399 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 06:32:41.076405 | orchestrator | Thursday 19 February 2026 06:32:36 +0000 (0:00:01.499) 0:49:22.768 ***** 2026-02-19 06:32:41.076410 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:32:41.076421 | orchestrator | 2026-02-19 06:32:41.076426 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-19 06:32:41.076432 | orchestrator | Thursday 19 February 2026 06:32:37 +0000 (0:00:01.126) 0:49:23.895 ***** 2026-02-19 06:32:41.076438 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:32:41.076444 | orchestrator | 2026-02-19 06:32:41.076449 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-19 06:32:41.076455 | orchestrator | Thursday 19 February 2026 06:32:38 +0000 (0:00:01.153) 0:49:25.049 ***** 2026-02-19 06:32:41.076461 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:32:41.076467 | orchestrator | 2026-02-19 06:32:41.076472 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-19 06:32:41.076478 | orchestrator | Thursday 
19 February 2026 06:32:39 +0000 (0:00:01.128) 0:49:26.178 ***** 2026-02-19 06:32:41.076487 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:05.317326 | orchestrator | 2026-02-19 06:33:05.317451 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-19 06:33:05.317466 | orchestrator | Thursday 19 February 2026 06:32:41 +0000 (0:00:01.113) 0:49:27.291 ***** 2026-02-19 06:33:05.317477 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:33:05.317487 | orchestrator | 2026-02-19 06:33:05.317497 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-19 06:33:05.317506 | orchestrator | Thursday 19 February 2026 06:32:42 +0000 (0:00:01.118) 0:49:28.410 ***** 2026-02-19 06:33:05.317515 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 06:33:05.317524 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:33:05.317533 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:33:05.317542 | orchestrator | 2026-02-19 06:33:05.317550 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-19 06:33:05.317559 | orchestrator | Thursday 19 February 2026 06:32:43 +0000 (0:00:01.652) 0:49:30.063 ***** 2026-02-19 06:33:05.317568 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:33:05.317576 | orchestrator | 2026-02-19 06:33:05.317585 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-19 06:33:05.317594 | orchestrator | Thursday 19 February 2026 06:32:45 +0000 (0:00:01.244) 0:49:31.307 ***** 2026-02-19 06:33:05.317602 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-19 06:33:05.317611 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:33:05.317620 | orchestrator 
| ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:33:05.317628 | orchestrator | 2026-02-19 06:33:05.317637 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-19 06:33:05.317645 | orchestrator | Thursday 19 February 2026 06:32:48 +0000 (0:00:03.007) 0:49:34.315 ***** 2026-02-19 06:33:05.317655 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-19 06:33:05.317663 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-19 06:33:05.317672 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-19 06:33:05.317681 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:05.317689 | orchestrator | 2026-02-19 06:33:05.317698 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-19 06:33:05.317707 | orchestrator | Thursday 19 February 2026 06:32:49 +0000 (0:00:01.506) 0:49:35.821 ***** 2026-02-19 06:33:05.317717 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-19 06:33:05.317729 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-19 06:33:05.317820 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-19 06:33:05.317833 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:05.317841 | orchestrator | 2026-02-19 06:33:05.317863 | 
orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-19 06:33:05.317875 | orchestrator | Thursday 19 February 2026 06:32:51 +0000 (0:00:01.725) 0:49:37.547 ***** 2026-02-19 06:33:05.317887 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:05.317900 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:05.317910 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:05.317920 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:05.317930 | orchestrator | 2026-02-19 06:33:05.317940 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-19 06:33:05.317966 | orchestrator | Thursday 19 February 2026 06:32:52 +0000 (0:00:01.245) 0:49:38.792 ***** 2026-02-19 06:33:05.317978 | orchestrator | ok: [testbed-node-0] 
=> (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 06:32:45.673636', 'end': '2026-02-19 06:32:45.741624', 'delta': '0:00:00.067988', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-19 06:33:05.317992 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 06:32:46.296546', 'end': '2026-02-19 06:32:46.346766', 'delta': '0:00:00.050220', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-19 06:33:05.318002 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '8bdbabe346bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 06:32:46.876303', 'end': '2026-02-19 06:32:46.921842', 'delta': '0:00:00.045539', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8bdbabe346bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-19 06:33:05.318079 | orchestrator | 2026-02-19 06:33:05.318097 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-19 06:33:05.318114 | orchestrator | Thursday 19 February 2026 06:32:53 +0000 (0:00:01.251) 0:49:40.044 ***** 2026-02-19 06:33:05.318130 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:33:05.318145 | orchestrator | 2026-02-19 06:33:05.318164 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-19 06:33:05.318173 | orchestrator | Thursday 19 February 2026 06:32:55 +0000 (0:00:01.270) 0:49:41.315 ***** 2026-02-19 06:33:05.318182 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:05.318190 | orchestrator | 2026-02-19 06:33:05.318199 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-19 06:33:05.318208 | orchestrator | Thursday 19 February 2026 06:32:56 +0000 (0:00:01.266) 0:49:42.581 ***** 2026-02-19 06:33:05.318216 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:33:05.318225 | orchestrator | 2026-02-19 06:33:05.318233 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-19 06:33:05.318242 | orchestrator | Thursday 19 February 2026 06:32:57 +0000 (0:00:01.119) 0:49:43.701 ***** 2026-02-19 06:33:05.318250 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:33:05.318259 | orchestrator | 2026-02-19 06:33:05.318268 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 06:33:05.318276 | orchestrator | Thursday 19 February 2026 06:32:59 +0000 (0:00:02.009) 0:49:45.711 
***** 2026-02-19 06:33:05.318285 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:33:05.318293 | orchestrator | 2026-02-19 06:33:05.318302 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-19 06:33:05.318310 | orchestrator | Thursday 19 February 2026 06:33:00 +0000 (0:00:01.125) 0:49:46.836 ***** 2026-02-19 06:33:05.318319 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:05.318327 | orchestrator | 2026-02-19 06:33:05.318336 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-19 06:33:05.318345 | orchestrator | Thursday 19 February 2026 06:33:01 +0000 (0:00:01.185) 0:49:48.021 ***** 2026-02-19 06:33:05.318354 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:05.318362 | orchestrator | 2026-02-19 06:33:05.318371 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 06:33:05.318380 | orchestrator | Thursday 19 February 2026 06:33:03 +0000 (0:00:01.246) 0:49:49.268 ***** 2026-02-19 06:33:05.318388 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:05.318397 | orchestrator | 2026-02-19 06:33:05.318405 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-19 06:33:05.318414 | orchestrator | Thursday 19 February 2026 06:33:04 +0000 (0:00:01.108) 0:49:50.376 ***** 2026-02-19 06:33:05.318429 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:13.515858 | orchestrator | 2026-02-19 06:33:13.515964 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-19 06:33:13.515980 | orchestrator | Thursday 19 February 2026 06:33:05 +0000 (0:00:01.155) 0:49:51.532 ***** 2026-02-19 06:33:13.515991 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:13.516003 | orchestrator | 2026-02-19 06:33:13.516013 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-02-19 06:33:13.516024 | orchestrator | Thursday 19 February 2026 06:33:06 +0000 (0:00:01.124) 0:49:52.656 ***** 2026-02-19 06:33:13.516033 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:13.516043 | orchestrator | 2026-02-19 06:33:13.516053 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 06:33:13.516086 | orchestrator | Thursday 19 February 2026 06:33:07 +0000 (0:00:01.132) 0:49:53.789 ***** 2026-02-19 06:33:13.516097 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:13.516106 | orchestrator | 2026-02-19 06:33:13.516116 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 06:33:13.516126 | orchestrator | Thursday 19 February 2026 06:33:08 +0000 (0:00:01.126) 0:49:54.915 ***** 2026-02-19 06:33:13.516135 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:13.516145 | orchestrator | 2026-02-19 06:33:13.516155 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 06:33:13.516165 | orchestrator | Thursday 19 February 2026 06:33:09 +0000 (0:00:01.119) 0:49:56.035 ***** 2026-02-19 06:33:13.516174 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:13.516184 | orchestrator | 2026-02-19 06:33:13.516193 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 06:33:13.516203 | orchestrator | Thursday 19 February 2026 06:33:10 +0000 (0:00:01.122) 0:49:57.158 ***** 2026-02-19 06:33:13.516215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-19 06:33:13.516228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:33:13.516239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:33:13.516264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 06:33:13.516277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-02-19 06:33:13.516287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:33:13.516314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:33:13.516337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d17f80a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14', 
'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 06:33:13.516355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:33:13.516368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:33:13.516379 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:13.516390 | orchestrator | 2026-02-19 06:33:13.516401 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 06:33:13.516412 | orchestrator | Thursday 19 February 2026 06:33:12 +0000 (0:00:01.282) 0:49:58.440 ***** 2026-02-19 06:33:13.516424 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:13.516449 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:21.233694 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:21.233883 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-18-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:21.233906 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:21.233936 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:21.233950 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:21.234012 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d17f80a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_2d17f80a-f41c-4c05-91d8-d602b7f93b84-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:21.234089 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:21.234103 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:33:21.234115 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:33:21.234128 | orchestrator | 2026-02-19 06:33:21.234141 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 06:33:21.234162 | 
orchestrator | Thursday 19 February 2026 06:33:13 +0000 (0:00:01.295) 0:49:59.736 *****
2026-02-19 06:33:21.234173 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:33:21.234185 | orchestrator |
2026-02-19 06:33:21.234196 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-19 06:33:21.234207 | orchestrator | Thursday 19 February 2026 06:33:15 +0000 (0:00:01.571) 0:50:01.308 *****
2026-02-19 06:33:21.234218 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:33:21.234228 | orchestrator |
2026-02-19 06:33:21.234239 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-19 06:33:21.234250 | orchestrator | Thursday 19 February 2026 06:33:16 +0000 (0:00:01.151) 0:50:02.459 *****
2026-02-19 06:33:21.234260 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:33:21.234271 | orchestrator |
2026-02-19 06:33:21.234282 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-19 06:33:21.234293 | orchestrator | Thursday 19 February 2026 06:33:17 +0000 (0:00:01.521) 0:50:03.981 *****
2026-02-19 06:33:21.234304 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:33:21.234314 | orchestrator |
2026-02-19 06:33:21.234325 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-19 06:33:21.234336 | orchestrator | Thursday 19 February 2026 06:33:18 +0000 (0:00:01.117) 0:50:05.098 *****
2026-02-19 06:33:21.234346 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:33:21.234357 | orchestrator |
2026-02-19 06:33:21.234368 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-19 06:33:21.234379 | orchestrator | Thursday 19 February 2026 06:33:20 +0000 (0:00:01.213) 0:50:06.312 *****
2026-02-19 06:33:21.234389 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:33:21.234400 | orchestrator |
2026-02-19 06:33:21.234411 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-19 06:33:21.234430 | orchestrator | Thursday 19 February 2026 06:33:21 +0000 (0:00:01.136) 0:50:07.448 *****
2026-02-19 06:34:12.763762 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 06:34:12.763911 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-19 06:34:12.763923 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-19 06:34:12.763930 | orchestrator |
2026-02-19 06:34:12.763938 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-19 06:34:12.763945 | orchestrator | Thursday 19 February 2026 06:33:22 +0000 (0:00:01.682) 0:50:09.131 *****
2026-02-19 06:34:12.763952 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 06:34:12.763958 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-19 06:34:12.763964 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-19 06:34:12.763970 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:34:12.763976 | orchestrator |
2026-02-19 06:34:12.763982 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-19 06:34:12.763988 | orchestrator | Thursday 19 February 2026 06:33:24 +0000 (0:00:01.244) 0:50:10.375 *****
2026-02-19 06:34:12.763994 | orchestrator | skipping: [testbed-node-0]
2026-02-19 06:34:12.764000 | orchestrator |
2026-02-19 06:34:12.764005 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-19 06:34:12.764011 | orchestrator | Thursday 19 February 2026 06:33:25 +0000 (0:00:01.175) 0:50:11.551 *****
2026-02-19 06:34:12.764017 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 06:34:12.764023 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:34:12.764029 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:34:12.764035 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-19 06:34:12.764041 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 06:34:12.764046 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-19 06:34:12.764072 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 06:34:12.764079 | orchestrator |
2026-02-19 06:34:12.764084 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-19 06:34:12.764090 | orchestrator | Thursday 19 February 2026 06:33:27 +0000 (0:00:02.122) 0:50:13.674 *****
2026-02-19 06:34:12.764096 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-19 06:34:12.764101 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:34:12.764107 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:34:12.764113 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-19 06:34:12.764130 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 06:34:12.764136 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-19 06:34:12.764141 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 06:34:12.764147 | orchestrator |
2026-02-19 06:34:12.764153 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************
2026-02-19 06:34:12.764159 | orchestrator | Thursday 19 February 2026 06:33:30 +0000 (0:00:02.606) 0:50:16.281 *****
2026-02-19 06:34:12.764164 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:34:12.764170 | orchestrator |
2026-02-19 06:34:12.764176 | orchestrator | TASK [Wait until only rank 0 is up] ********************************************
2026-02-19 06:34:12.764182 | orchestrator | Thursday 19 February 2026 06:33:33 +0000 (0:00:03.220) 0:50:19.501 *****
2026-02-19 06:34:12.764187 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:34:12.764193 | orchestrator |
2026-02-19 06:34:12.764199 | orchestrator | TASK [Get name of remaining active mds] ****************************************
2026-02-19 06:34:12.764204 | orchestrator | Thursday 19 February 2026 06:33:36 +0000 (0:00:03.229) 0:50:22.731 *****
2026-02-19 06:34:12.764210 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:34:12.764216 | orchestrator |
2026-02-19 06:34:12.764222 | orchestrator | TASK [Set_fact mds_active_name] ************************************************
2026-02-19 06:34:12.764227 | orchestrator | Thursday 19 February 2026 06:33:38 +0000 (0:00:02.182) 0:50:24.914 *****
2026-02-19 06:34:12.764236 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4728', 'value': {'gid': 4728, 'name': 'testbed-node-5', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.15:6817/2266415785', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.15:6816', 'nonce': 2266415785}, {'type': 'v1', 'addr': '192.168.16.15:6817', 'nonce': 2266415785}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}})
2026-02-19 06:34:12.764244 | orchestrator |
2026-02-19 06:34:12.764250 | orchestrator | TASK [Set_fact mds_active_host] ************************************************
2026-02-19 06:34:12.764256 | orchestrator | Thursday 19 February 2026 06:33:39 +0000 (0:00:01.140) 0:50:26.054 *****
2026-02-19 06:34:12.764284 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-19 06:34:12.764291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-19 06:34:12.764297 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-5)
2026-02-19 06:34:12.764303 | orchestrator |
2026-02-19 06:34:12.764309 | orchestrator | TASK [Create standby_mdss group] ***********************************************
2026-02-19 06:34:12.764315 | orchestrator | Thursday 19 February 2026 06:33:41 +0000 (0:00:01.913) 0:50:27.968 *****
2026-02-19 06:34:12.764320 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3)
2026-02-19 06:34:12.764332 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4)
2026-02-19 06:34:12.764338 | orchestrator |
2026-02-19 06:34:12.764344 | orchestrator | TASK [Stop standby ceph mds] ***************************************************
2026-02-19 06:34:12.764350 | orchestrator | Thursday 19 February 2026 06:33:43 +0000 (0:00:01.462) 0:50:29.431 *****
2026-02-19 06:34:12.764355 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-19 06:34:12.764361 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 06:34:12.764367 | orchestrator |
2026-02-19 06:34:12.764372 | orchestrator | TASK [Mask systemd units for standby ceph mds] *********************************
2026-02-19 06:34:12.764378 | orchestrator | Thursday 19 February 2026 06:33:51 +0000 (0:00:08.378) 0:50:37.810 *****
2026-02-19 06:34:12.764384 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-19 06:34:12.764390 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 06:34:12.764395 | orchestrator |
2026-02-19 06:34:12.764401 | orchestrator | TASK [Wait until all standbys mds are stopped] *********************************
2026-02-19 06:34:12.764407 | orchestrator | Thursday 19 February 2026 06:33:55 +0000 (0:00:03.821) 0:50:41.631 *****
2026-02-19 06:34:12.764413 | orchestrator | ok: [testbed-node-0]
2026-02-19 06:34:12.764419 | orchestrator |
2026-02-19 06:34:12.764424 | orchestrator | TASK [Create active_mdss group] ************************************************
2026-02-19 06:34:12.764430 | orchestrator | Thursday 19 February 2026 06:33:57 +0000 (0:00:02.235) 0:50:43.867 *****
2026-02-19 06:34:12.764436 | orchestrator | changed: [testbed-node-0]
2026-02-19 06:34:12.764442 | orchestrator |
2026-02-19 06:34:12.764448 | orchestrator | PLAY [Upgrade active mds] ******************************************************
2026-02-19 06:34:12.764453 | orchestrator |
2026-02-19 06:34:12.764459 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-19 06:34:12.764465 | orchestrator | Thursday 19 February 2026 06:33:59 +0000 (0:00:01.467) 0:50:45.334 *****
2026-02-19 06:34:12.764471 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-02-19 06:34:12.764477 | orchestrator |
2026-02-19 06:34:12.764483 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-19 06:34:12.764492 | orchestrator | Thursday 19 February 2026 06:34:00 +0000 (0:00:01.096) 0:50:46.431 *****
2026-02-19 06:34:12.764498 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:34:12.764504 | orchestrator |
2026-02-19 06:34:12.764510 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-19 06:34:12.764515 | orchestrator | Thursday 19 February 2026 06:34:01 +0000 (0:00:01.462) 0:50:47.893 *****
2026-02-19 06:34:12.764521 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:34:12.764527 | orchestrator |
2026-02-19 06:34:12.764533 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-19 06:34:12.764538 | orchestrator | Thursday 19 February 2026 06:34:02 +0000 (0:00:01.133) 0:50:49.027 *****
2026-02-19 06:34:12.764544 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:34:12.764550 | orchestrator |
2026-02-19 06:34:12.764556 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-19 06:34:12.764562 | orchestrator | Thursday 19 February 2026 06:34:04 +0000 (0:00:01.435) 0:50:50.463 *****
2026-02-19 06:34:12.764567 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:34:12.764573 | orchestrator |
2026-02-19 06:34:12.764579 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-19 06:34:12.764585 | orchestrator | Thursday 19 February 2026 06:34:05 +0000 (0:00:01.135) 0:50:51.598 *****
2026-02-19 06:34:12.764591 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:34:12.764596 | orchestrator |
2026-02-19 06:34:12.764602 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-19 06:34:12.764608 | orchestrator | Thursday 19 February 2026 06:34:06 +0000 (0:00:01.113) 0:50:52.712 *****
2026-02-19 06:34:12.764614 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:34:12.764624 | orchestrator |
2026-02-19 06:34:12.764629 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-19 06:34:12.764635 | orchestrator | Thursday 19 February 2026 06:34:07 +0000 (0:00:01.129) 0:50:53.842 *****
2026-02-19 06:34:12.764641 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:34:12.764647 | orchestrator |
2026-02-19 06:34:12.764653 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-19 06:34:12.764658 | orchestrator | Thursday 19 February 2026 06:34:08 +0000 (0:00:01.132) 0:50:54.975 *****
2026-02-19 06:34:12.764664 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:34:12.764670 | orchestrator |
2026-02-19 06:34:12.764676 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-19 06:34:12.764681 | orchestrator | Thursday 19 February 2026 06:34:09 +0000 (0:00:01.095) 0:50:56.070 *****
2026-02-19 06:34:12.764687 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:34:12.764695 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:34:12.764719 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:34:12.764730 | orchestrator |
2026-02-19 06:34:12.764739 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-19 06:34:12.764748 | orchestrator | Thursday 19 February 2026 06:34:11 +0000 (0:00:01.676) 0:50:57.746 *****
2026-02-19 06:34:12.764757 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:34:12.764766 | orchestrator |
2026-02-19 06:34:12.764783 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-19 06:34:37.625254 | orchestrator | Thursday 19 February 2026 06:34:12 +0000 (0:00:01.230) 0:50:58.977 *****
2026-02-19 06:34:37.625383 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:34:37.625406 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:34:37.625421 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:34:37.625433 | orchestrator |
2026-02-19 06:34:37.625447 | orchestrator | TASK
[ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-19 06:34:37.625459 | orchestrator | Thursday 19 February 2026 06:34:15 +0000 (0:00:02.890) 0:51:01.867 ***** 2026-02-19 06:34:37.625473 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-19 06:34:37.625486 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-19 06:34:37.625498 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-19 06:34:37.625511 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:34:37.625523 | orchestrator | 2026-02-19 06:34:37.625537 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-19 06:34:37.625550 | orchestrator | Thursday 19 February 2026 06:34:17 +0000 (0:00:01.413) 0:51:03.281 ***** 2026-02-19 06:34:37.625567 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-19 06:34:37.625583 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-19 06:34:37.625596 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-19 06:34:37.625609 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:34:37.625621 | orchestrator | 2026-02-19 06:34:37.625634 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-19 06:34:37.625647 | orchestrator | Thursday 19 February 2026 06:34:18 +0000 
(0:00:01.913) 0:51:05.195 ***** 2026-02-19 06:34:37.625736 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:37.625757 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:37.625771 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:37.625783 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:34:37.625796 | orchestrator | 2026-02-19 06:34:37.625808 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-19 06:34:37.625820 | orchestrator | Thursday 19 February 2026 06:34:20 +0000 (0:00:01.152) 0:51:06.347 ***** 2026-02-19 06:34:37.625833 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 
06:34:13.343604', 'end': '2026-02-19 06:34:13.408651', 'delta': '0:00:00.065047', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-19 06:34:37.625872 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 06:34:13.912720', 'end': '2026-02-19 06:34:13.960496', 'delta': '0:00:00.047776', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-19 06:34:37.625889 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '8bdbabe346bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 06:34:14.483581', 'end': '2026-02-19 06:34:14.523410', 'delta': '0:00:00.039829', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['8bdbabe346bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 06:34:37.625967 | orchestrator |
2026-02-19 06:34:37.625985 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-19 06:34:37.625999 | orchestrator | Thursday 19 February 2026 06:34:21 +0000 (0:00:01.202) 0:51:07.550 *****
2026-02-19 06:34:37.626012 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:34:37.626093 | orchestrator |
2026-02-19 06:34:37.626108 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-19 06:34:37.626121 | orchestrator | Thursday 19 February 2026 06:34:22 +0000 (0:00:01.265) 0:51:08.816 *****
2026-02-19 06:34:37.626134 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:34:37.626145 | orchestrator |
2026-02-19 06:34:37.626164 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-19 06:34:37.626174 | orchestrator | Thursday 19 February 2026 06:34:24 +0000 (0:00:01.605) 0:51:10.422 *****
2026-02-19 06:34:37.626184 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:34:37.626197 | orchestrator |
2026-02-19 06:34:37.626210 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-19 06:34:37.626222 | orchestrator | Thursday 19 February 2026 06:34:25 +0000 (0:00:01.115) 0:51:11.537 *****
2026-02-19 06:34:37.626235 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-19 06:34:37.626247 | orchestrator |
2026-02-19 06:34:37.626259 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 06:34:37.626270 | orchestrator | Thursday 19 February 2026 06:34:27 +0000 (0:00:02.081) 0:51:13.619 *****
2026-02-19 06:34:37.626282 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:34:37.626294 | orchestrator |
2026-02-19 06:34:37.626305 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-19 06:34:37.626316 | orchestrator | Thursday 19 February 2026 06:34:28 +0000 (0:00:01.127) 0:51:14.746 *****
2026-02-19 06:34:37.626327 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:34:37.626339 | orchestrator |
2026-02-19 06:34:37.626350 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-19 06:34:37.626362 | orchestrator | Thursday 19 February 2026 06:34:29 +0000 (0:00:01.093) 0:51:15.839 *****
2026-02-19 06:34:37.626373 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:34:37.626384 | orchestrator |
2026-02-19 06:34:37.626396 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 06:34:37.626407 | orchestrator | Thursday 19 February 2026 06:34:30 +0000 (0:00:01.212) 0:51:17.052 *****
2026-02-19 06:34:37.626419 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:34:37.626430 | orchestrator |
2026-02-19 06:34:37.626441 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-19 06:34:37.626454 | orchestrator | Thursday 19 February 2026 06:34:31 +0000 (0:00:01.089) 0:51:18.141 *****
2026-02-19 06:34:37.626467 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:34:37.626479 | orchestrator |
2026-02-19 06:34:37.626491 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-19 06:34:37.626502 | orchestrator | Thursday 19 February 2026 06:34:33 +0000 (0:00:01.095) 0:51:19.237 *****
2026-02-19 06:34:37.626516 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:34:37.626528 | orchestrator |
2026-02-19 06:34:37.626542 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-19 06:34:37.626556 | orchestrator | Thursday 19 February 2026 06:34:34 +0000 (0:00:01.159)
0:51:20.396 ***** 2026-02-19 06:34:37.626569 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:34:37.626583 | orchestrator | 2026-02-19 06:34:37.626597 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 06:34:37.626610 | orchestrator | Thursday 19 February 2026 06:34:35 +0000 (0:00:01.140) 0:51:21.537 ***** 2026-02-19 06:34:37.626623 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:34:37.626636 | orchestrator | 2026-02-19 06:34:37.626649 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 06:34:37.626662 | orchestrator | Thursday 19 February 2026 06:34:36 +0000 (0:00:01.166) 0:51:22.704 ***** 2026-02-19 06:34:37.626713 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:34:37.626756 | orchestrator | 2026-02-19 06:34:39.009795 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 06:34:39.009925 | orchestrator | Thursday 19 February 2026 06:34:37 +0000 (0:00:01.135) 0:51:23.840 ***** 2026-02-19 06:34:39.009949 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:34:39.009968 | orchestrator | 2026-02-19 06:34:39.009988 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 06:34:39.010004 | orchestrator | Thursday 19 February 2026 06:34:38 +0000 (0:00:01.164) 0:51:25.004 ***** 2026-02-19 06:34:39.010156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:34:39.010186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': 
{'virtual': 1, 'links': {'ids': ['dm-name-ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456', 'dm-uuid-LVM-gHzkzoT6x1EhckfA8WsFQCGWNshTerqrXG1Ajk5mh4ejOwZYq1z2HQZKbcxUaUg2'], 'uuids': ['ca7295e3-b0e7-43de-a68b-3daf29557592'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2']}})  2026-02-19 06:34:39.010227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b', 'scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '74afed04', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 06:34:39.010248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6O260y-bve9-uiSU-QHAy-uS14-SBn4-tvFUE4', 'scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42', 'scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210']}})  2026-02-19 06:34:39.010269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:34:39.010289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:34:39.010372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-22-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 06:34:39.010397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:34:39.010417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM', 'dm-uuid-CRYPT-LUKS2-0386b2e9039d452a9d925bb7d9e8a516-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:34:39.010438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:34:39.010468 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210', 'dm-uuid-LVM-UIbdS0VVHImCuypuIpNFpiSdvep5TRFy7pgtKei4H9zcQ1O9SOgtegap7Wmtw1fM'], 'uuids': ['0386b2e9-039d-452a-9d92-5bb7d9e8a516'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM']}})  2026-02-19 06:34:39.010491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-82yKcB-Ey0W-COBu-ydNY-Ko6v-AgZ3-OegvdJ', 'scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46', 'scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456']}})  2026-02-19 06:34:39.010511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:34:39.010570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b283ac38', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 06:34:40.330509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:34:40.330623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:34:40.330642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2', 'dm-uuid-CRYPT-LUKS2-ca7295e3b0e743dea68b3daf29557592-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:34:40.330659 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:34:40.330674 | orchestrator | 2026-02-19 06:34:40.330744 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 06:34:40.330760 | orchestrator | Thursday 19 February 2026 06:34:40 +0000 (0:00:01.316) 0:51:26.321 ***** 2026-02-19 06:34:40.330804 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:40.330820 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456', 'dm-uuid-LVM-gHzkzoT6x1EhckfA8WsFQCGWNshTerqrXG1Ajk5mh4ejOwZYq1z2HQZKbcxUaUg2'], 'uuids': ['ca7295e3-b0e7-43de-a68b-3daf29557592'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:40.330837 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b', 'scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '74afed04', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:40.330900 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6O260y-bve9-uiSU-QHAy-uS14-SBn4-tvFUE4', 'scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42', 'scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:40.330913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:40.330929 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:40.330938 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-22-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:40.330948 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:40.330964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM', 'dm-uuid-CRYPT-LUKS2-0386b2e9039d452a9d925bb7d9e8a516-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:45.591319 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:45.591404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210', 'dm-uuid-LVM-UIbdS0VVHImCuypuIpNFpiSdvep5TRFy7pgtKei4H9zcQ1O9SOgtegap7Wmtw1fM'], 'uuids': ['0386b2e9-039d-452a-9d92-5bb7d9e8a516'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:45.591429 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-82yKcB-Ey0W-COBu-ydNY-Ko6v-AgZ3-OegvdJ', 'scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46', 'scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:45.591440 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:45.591466 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b283ac38', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:45.591480 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:45.591487 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:45.591494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2', 'dm-uuid-CRYPT-LUKS2-ca7295e3b0e743dea68b3daf29557592-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:34:45.591501 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:34:45.591510 | orchestrator | 2026-02-19 06:34:45.591517 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 06:34:45.591524 | orchestrator | Thursday 19 February 2026 06:34:41 +0000 (0:00:01.402) 0:51:27.723 ***** 2026-02-19 06:34:45.591531 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:34:45.591538 | orchestrator | 2026-02-19 06:34:45.591544 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 06:34:45.591551 | orchestrator | Thursday 19 February 2026 06:34:43 +0000 (0:00:01.513) 0:51:29.237 ***** 2026-02-19 06:34:45.591557 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:34:45.591563 | orchestrator | 2026-02-19 06:34:45.591569 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:34:45.591575 | orchestrator | Thursday 19 February 2026 06:34:44 +0000 (0:00:01.104) 0:51:30.341 ***** 2026-02-19 06:34:45.591581 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:34:45.591587 | orchestrator | 2026-02-19 06:34:45.591594 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:34:45.591604 | orchestrator | Thursday 19 February 2026 06:34:45 +0000 (0:00:01.466) 0:51:31.808 ***** 2026-02-19 06:35:26.068368 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:35:26.068505 | orchestrator | 2026-02-19 06:35:26.068520 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:35:26.068550 | orchestrator | Thursday 19 February 2026 06:34:46 +0000 (0:00:01.110) 0:51:32.918 ***** 2026-02-19 06:35:26.068560 | orchestrator | skipping: [testbed-node-5] 2026-02-19 
06:35:26.068569 | orchestrator | 2026-02-19 06:35:26.068579 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:35:26.068615 | orchestrator | Thursday 19 February 2026 06:34:47 +0000 (0:00:01.218) 0:51:34.137 ***** 2026-02-19 06:35:26.068625 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:35:26.068633 | orchestrator | 2026-02-19 06:35:26.068642 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 06:35:26.068651 | orchestrator | Thursday 19 February 2026 06:34:49 +0000 (0:00:01.146) 0:51:35.284 ***** 2026-02-19 06:35:26.068745 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-19 06:35:26.068761 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-19 06:35:26.068774 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-19 06:35:26.068788 | orchestrator | 2026-02-19 06:35:26.068802 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 06:35:26.068815 | orchestrator | Thursday 19 February 2026 06:34:51 +0000 (0:00:01.964) 0:51:37.248 ***** 2026-02-19 06:35:26.068829 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-19 06:35:26.068844 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-19 06:35:26.068857 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-19 06:35:26.068871 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:35:26.068885 | orchestrator | 2026-02-19 06:35:26.068899 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 06:35:26.068913 | orchestrator | Thursday 19 February 2026 06:34:52 +0000 (0:00:01.131) 0:51:38.380 ***** 2026-02-19 06:35:26.068928 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-19 06:35:26.068943 | 
2026-02-19 06:35:26.068958 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:35:26.068974 | orchestrator | Thursday 19 February 2026 06:34:53 +0000 (0:00:01.129) 0:51:39.510 *****
2026-02-19 06:35:26.068988 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:35:26.069002 | orchestrator |
2026-02-19 06:35:26.069017 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:35:26.069032 | orchestrator | Thursday 19 February 2026 06:34:54 +0000 (0:00:01.119) 0:51:40.629 *****
2026-02-19 06:35:26.069045 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:35:26.069059 | orchestrator |
2026-02-19 06:35:26.069074 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:35:26.069088 | orchestrator | Thursday 19 February 2026 06:34:55 +0000 (0:00:01.120) 0:51:41.750 *****
2026-02-19 06:35:26.069102 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:35:26.069116 | orchestrator |
2026-02-19 06:35:26.069131 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:35:26.069145 | orchestrator | Thursday 19 February 2026 06:34:56 +0000 (0:00:01.130) 0:51:42.881 *****
2026-02-19 06:35:26.069159 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:35:26.069174 | orchestrator |
2026-02-19 06:35:26.069188 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:35:26.069202 | orchestrator | Thursday 19 February 2026 06:34:57 +0000 (0:00:01.262) 0:51:44.143 *****
2026-02-19 06:35:26.069216 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-19 06:35:26.069230 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-19 06:35:26.069244 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:35:26.069258 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:35:26.069272 | orchestrator |
2026-02-19 06:35:26.069286 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 06:35:26.069300 | orchestrator | Thursday 19 February 2026 06:34:59 +0000 (0:00:01.405) 0:51:45.549 *****
2026-02-19 06:35:26.069314 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-19 06:35:26.069327 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-19 06:35:26.069341 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:35:26.069369 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:35:26.069383 | orchestrator |
2026-02-19 06:35:26.069397 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 06:35:26.069411 | orchestrator | Thursday 19 February 2026 06:35:00 +0000 (0:00:01.503) 0:51:47.053 *****
2026-02-19 06:35:26.069425 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-19 06:35:26.069439 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-19 06:35:26.069453 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:35:26.069467 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:35:26.069482 | orchestrator |
2026-02-19 06:35:26.069496 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 06:35:26.069510 | orchestrator | Thursday 19 February 2026 06:35:02 +0000 (0:00:01.355) 0:51:48.408 *****
2026-02-19 06:35:26.069524 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:35:26.069538 | orchestrator |
2026-02-19 06:35:26.069552 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 06:35:26.069566 | orchestrator | Thursday 19 February 2026 06:35:03 +0000 (0:00:01.127) 0:51:49.535 *****
2026-02-19 06:35:26.069578 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-19 06:35:26.069594 | orchestrator |
2026-02-19 06:35:26.069607 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-19 06:35:26.069621 | orchestrator | Thursday 19 February 2026 06:35:04 +0000 (0:00:01.319) 0:51:50.855 *****
2026-02-19 06:35:26.069684 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:35:26.069701 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:35:26.069723 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:35:26.069737 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-19 06:35:26.069751 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 06:35:26.069765 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:35:26.069779 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 06:35:26.069794 | orchestrator |
2026-02-19 06:35:26.069809 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-19 06:35:26.069823 | orchestrator | Thursday 19 February 2026 06:35:06 +0000 (0:00:02.205) 0:51:53.060 *****
2026-02-19 06:35:26.069837 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:35:26.069851 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:35:26.069866 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:35:26.069879 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-19 06:35:26.069894 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 06:35:26.069908 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:35:26.069923 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 06:35:26.069936 | orchestrator |
2026-02-19 06:35:26.069950 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-02-19 06:35:26.069965 | orchestrator | Thursday 19 February 2026 06:35:09 +0000 (0:00:02.544) 0:51:55.604 *****
2026-02-19 06:35:26.069979 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:35:26.069993 | orchestrator |
2026-02-19 06:35:26.070007 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 06:35:26.070110 | orchestrator | Thursday 19 February 2026 06:35:10 +0000 (0:00:01.153) 0:51:56.757 *****
2026-02-19 06:35:26.070126 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-19 06:35:26.070152 | orchestrator |
2026-02-19 06:35:26.070168 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-19 06:35:26.070183 | orchestrator | Thursday 19 February 2026 06:35:11 +0000 (0:00:01.083) 0:51:57.840 *****
2026-02-19 06:35:26.070198 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-19 06:35:26.070214 | orchestrator |
2026-02-19 06:35:26.070227 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-19 06:35:26.070243 | orchestrator | Thursday 19 February 2026 06:35:12 +0000 (0:00:01.192) 0:51:59.033 *****
2026-02-19 06:35:26.070258 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:35:26.070272 | orchestrator |
2026-02-19 06:35:26.070287 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-19 06:35:26.070302 | orchestrator | Thursday 19 February 2026 06:35:13 +0000 (0:00:01.087) 0:52:00.121 *****
2026-02-19 06:35:26.070317 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:35:26.070332 | orchestrator |
2026-02-19 06:35:26.070347 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-19 06:35:26.070362 | orchestrator | Thursday 19 February 2026 06:35:15 +0000 (0:00:01.514) 0:52:01.635 *****
2026-02-19 06:35:26.070376 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:35:26.070391 | orchestrator |
2026-02-19 06:35:26.070406 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-19 06:35:26.070421 | orchestrator | Thursday 19 February 2026 06:35:16 +0000 (0:00:01.540) 0:52:03.176 *****
2026-02-19 06:35:26.070435 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:35:26.070450 | orchestrator |
2026-02-19 06:35:26.070465 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-19 06:35:26.070480 | orchestrator | Thursday 19 February 2026 06:35:18 +0000 (0:00:01.489) 0:52:04.665 *****
2026-02-19 06:35:26.070495 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:35:26.070509 | orchestrator |
2026-02-19 06:35:26.070524 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-19 06:35:26.070539 | orchestrator | Thursday 19 February 2026 06:35:19 +0000 (0:00:01.091) 0:52:05.757 *****
2026-02-19 06:35:26.070553 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:35:26.070568 | orchestrator |
2026-02-19 06:35:26.070582 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-19 06:35:26.070597 | orchestrator | Thursday 19 February 2026 06:35:20 +0000 (0:00:01.186) 0:52:06.944 *****
2026-02-19 06:35:26.070612 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:35:26.070626 | orchestrator |
2026-02-19 06:35:26.070641 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-19 06:35:26.070672 | orchestrator | Thursday 19 February 2026 06:35:21 +0000 (0:00:01.102) 0:52:08.047 *****
2026-02-19 06:35:26.070688 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:35:26.070701 | orchestrator |
2026-02-19 06:35:26.070716 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-19 06:35:26.070730 | orchestrator | Thursday 19 February 2026 06:35:23 +0000 (0:00:01.568) 0:52:09.616 *****
2026-02-19 06:35:26.070744 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:35:26.070758 | orchestrator |
2026-02-19 06:35:26.070772 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-19 06:35:26.070786 | orchestrator | Thursday 19 February 2026 06:35:24 +0000 (0:00:01.543) 0:52:11.159 *****
2026-02-19 06:35:26.070800 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:35:26.070814 | orchestrator |
2026-02-19 06:35:26.070828 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 06:35:26.070855 | orchestrator | Thursday 19 February 2026 06:35:26 +0000 (0:00:01.099) 0:52:12.259 *****
2026-02-19 06:36:13.607139 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.607265 | orchestrator |
2026-02-19 06:36:13.607299 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 06:36:13.607313 | orchestrator | Thursday 19 February 2026 06:35:27 +0000 (0:00:01.091) 0:52:13.350 *****
2026-02-19 06:36:13.607348 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:36:13.607361 | orchestrator |
2026-02-19 06:36:13.607372 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 06:36:13.607383 | orchestrator | Thursday 19 February 2026 06:35:28 +0000 (0:00:01.125) 0:52:14.476 *****
2026-02-19 06:36:13.607394 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:36:13.607404 | orchestrator |
2026-02-19 06:36:13.607415 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 06:36:13.607426 | orchestrator | Thursday 19 February 2026 06:35:29 +0000 (0:00:01.124) 0:52:15.600 *****
2026-02-19 06:36:13.607436 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:36:13.607447 | orchestrator |
2026-02-19 06:36:13.607458 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 06:36:13.607468 | orchestrator | Thursday 19 February 2026 06:35:30 +0000 (0:00:01.211) 0:52:16.812 *****
2026-02-19 06:36:13.607479 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.607490 | orchestrator |
2026-02-19 06:36:13.607500 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 06:36:13.607511 | orchestrator | Thursday 19 February 2026 06:35:31 +0000 (0:00:01.083) 0:52:17.896 *****
2026-02-19 06:36:13.607522 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.607532 | orchestrator |
2026-02-19 06:36:13.607543 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 06:36:13.607553 | orchestrator | Thursday 19 February 2026 06:35:32 +0000 (0:00:01.103) 0:52:19.000 *****
2026-02-19 06:36:13.607564 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.607575 | orchestrator |
2026-02-19 06:36:13.607585 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 06:36:13.607596 | orchestrator | Thursday 19 February 2026 06:35:33 +0000 (0:00:01.118) 0:52:20.119 *****
2026-02-19 06:36:13.607606 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:36:13.607617 | orchestrator |
2026-02-19 06:36:13.607651 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 06:36:13.607664 | orchestrator | Thursday 19 February 2026 06:35:35 +0000 (0:00:01.118) 0:52:21.238 *****
2026-02-19 06:36:13.607676 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:36:13.607689 | orchestrator |
2026-02-19 06:36:13.607701 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-19 06:36:13.607714 | orchestrator | Thursday 19 February 2026 06:35:36 +0000 (0:00:01.116) 0:52:22.355 *****
2026-02-19 06:36:13.607726 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.607738 | orchestrator |
2026-02-19 06:36:13.607751 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-19 06:36:13.607764 | orchestrator | Thursday 19 February 2026 06:35:37 +0000 (0:00:01.107) 0:52:23.462 *****
2026-02-19 06:36:13.607777 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.607789 | orchestrator |
2026-02-19 06:36:13.607800 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-19 06:36:13.607812 | orchestrator | Thursday 19 February 2026 06:35:38 +0000 (0:00:01.109) 0:52:24.572 *****
2026-02-19 06:36:13.607825 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.607836 | orchestrator |
2026-02-19 06:36:13.607849 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-19 06:36:13.607861 | orchestrator | Thursday 19 February 2026 06:35:39 +0000 (0:00:01.095) 0:52:25.667 *****
2026-02-19 06:36:13.607873 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.607885 | orchestrator |
2026-02-19 06:36:13.607897 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-19 06:36:13.607909 | orchestrator | Thursday 19 February 2026 06:35:40 +0000 (0:00:01.141) 0:52:26.809 *****
2026-02-19 06:36:13.607930 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.607950 | orchestrator |
2026-02-19 06:36:13.607969 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-19 06:36:13.607990 | orchestrator | Thursday 19 February 2026 06:35:41 +0000 (0:00:01.122) 0:52:27.932 *****
2026-02-19 06:36:13.608025 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608044 | orchestrator |
2026-02-19 06:36:13.608056 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-19 06:36:13.608066 | orchestrator | Thursday 19 February 2026 06:35:42 +0000 (0:00:01.101) 0:52:29.034 *****
2026-02-19 06:36:13.608077 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608088 | orchestrator |
2026-02-19 06:36:13.608098 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-19 06:36:13.608110 | orchestrator | Thursday 19 February 2026 06:35:43 +0000 (0:00:01.109) 0:52:30.144 *****
2026-02-19 06:36:13.608120 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608131 | orchestrator |
2026-02-19 06:36:13.608141 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-19 06:36:13.608152 | orchestrator | Thursday 19 February 2026 06:35:45 +0000 (0:00:01.112) 0:52:31.257 *****
2026-02-19 06:36:13.608162 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608173 | orchestrator |
2026-02-19 06:36:13.608183 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-19 06:36:13.608194 | orchestrator | Thursday 19 February 2026 06:35:46 +0000 (0:00:01.098) 0:52:32.355 *****
2026-02-19 06:36:13.608204 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608215 | orchestrator |
2026-02-19 06:36:13.608225 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-19 06:36:13.608236 | orchestrator | Thursday 19 February 2026 06:35:47 +0000 (0:00:01.090) 0:52:33.445 *****
2026-02-19 06:36:13.608246 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608257 | orchestrator |
2026-02-19 06:36:13.608267 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-19 06:36:13.608278 | orchestrator | Thursday 19 February 2026 06:35:48 +0000 (0:00:01.130) 0:52:34.576 *****
2026-02-19 06:36:13.608288 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608299 | orchestrator |
2026-02-19 06:36:13.608327 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-19 06:36:13.608345 | orchestrator | Thursday 19 February 2026 06:35:49 +0000 (0:00:01.098) 0:52:35.675 *****
2026-02-19 06:36:13.608357 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:36:13.608368 | orchestrator |
2026-02-19 06:36:13.608378 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-19 06:36:13.608389 | orchestrator | Thursday 19 February 2026 06:35:51 +0000 (0:00:01.942) 0:52:37.618 *****
2026-02-19 06:36:13.608400 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:36:13.608411 | orchestrator |
2026-02-19 06:36:13.608422 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-19 06:36:13.608432 | orchestrator | Thursday 19 February 2026 06:35:53 +0000 (0:00:02.279) 0:52:39.898 *****
2026-02-19 06:36:13.608443 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-19 06:36:13.608455 | orchestrator |
2026-02-19 06:36:13.608466 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-19 06:36:13.608476 | orchestrator | Thursday 19 February 2026 06:35:54 +0000 (0:00:01.183) 0:52:41.082 *****
2026-02-19 06:36:13.608487 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608498 | orchestrator |
2026-02-19 06:36:13.608509 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-19 06:36:13.608519 | orchestrator | Thursday 19 February 2026 06:35:55 +0000 (0:00:01.139) 0:52:42.222 *****
2026-02-19 06:36:13.608530 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608541 | orchestrator |
2026-02-19 06:36:13.608551 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-19 06:36:13.608562 | orchestrator | Thursday 19 February 2026 06:35:57 +0000 (0:00:01.111) 0:52:43.333 *****
2026-02-19 06:36:13.608573 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 06:36:13.608584 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 06:36:13.608605 | orchestrator |
2026-02-19 06:36:13.608616 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-19 06:36:13.608646 | orchestrator | Thursday 19 February 2026 06:35:58 +0000 (0:00:01.874) 0:52:45.207 *****
2026-02-19 06:36:13.608657 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:36:13.608668 | orchestrator |
2026-02-19 06:36:13.608679 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-19 06:36:13.608689 | orchestrator | Thursday 19 February 2026 06:36:00 +0000 (0:00:01.479) 0:52:46.686 *****
2026-02-19 06:36:13.608700 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608711 | orchestrator |
2026-02-19 06:36:13.608722 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-19 06:36:13.608732 | orchestrator | Thursday 19 February 2026 06:36:01 +0000 (0:00:01.158) 0:52:47.845 *****
2026-02-19 06:36:13.608743 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608754 | orchestrator |
2026-02-19 06:36:13.608764 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-19 06:36:13.608775 | orchestrator | Thursday 19 February 2026 06:36:02 +0000 (0:00:01.131) 0:52:48.977 *****
2026-02-19 06:36:13.608786 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608796 | orchestrator |
2026-02-19 06:36:13.608807 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-19 06:36:13.608818 | orchestrator | Thursday 19 February 2026 06:36:03 +0000 (0:00:01.121) 0:52:50.098 *****
2026-02-19 06:36:13.608828 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-19 06:36:13.608839 | orchestrator |
2026-02-19 06:36:13.608850 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-19 06:36:13.608860 | orchestrator | Thursday 19 February 2026 06:36:05 +0000 (0:00:01.149) 0:52:51.248 *****
2026-02-19 06:36:13.608871 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:36:13.608882 | orchestrator |
2026-02-19 06:36:13.608893 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-19 06:36:13.608903 | orchestrator | Thursday 19 February 2026 06:36:06 +0000 (0:00:01.761) 0:52:53.009 *****
2026-02-19 06:36:13.608914 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-19 06:36:13.608924 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-19 06:36:13.608935 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-19 06:36:13.608946 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608957 | orchestrator |
2026-02-19 06:36:13.608967 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-19 06:36:13.608978 | orchestrator | Thursday 19 February 2026 06:36:07 +0000 (0:00:01.123) 0:52:54.133 *****
2026-02-19 06:36:13.608989 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.608999 | orchestrator |
2026-02-19 06:36:13.609010 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-19 06:36:13.609021 | orchestrator | Thursday 19 February 2026 06:36:09 +0000 (0:00:01.100) 0:52:55.233 *****
2026-02-19 06:36:13.609031 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.609042 | orchestrator |
2026-02-19 06:36:13.609052 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-19 06:36:13.609063 | orchestrator | Thursday 19 February 2026 06:36:10 +0000 (0:00:01.173) 0:52:56.407 *****
2026-02-19 06:36:13.609074 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.609084 | orchestrator |
2026-02-19 06:36:13.609095 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-19 06:36:13.609105 | orchestrator | Thursday 19 February 2026 06:36:11 +0000 (0:00:01.142) 0:52:57.550 *****
2026-02-19 06:36:13.609116 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.609127 | orchestrator |
2026-02-19 06:36:13.609138 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-19 06:36:13.609173 | orchestrator | Thursday 19 February 2026 06:36:12 +0000 (0:00:01.150) 0:52:58.700 *****
2026-02-19 06:36:13.609191 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:36:13.609202 | orchestrator |
2026-02-19 06:36:13.609220 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-19 06:37:04.493659 | orchestrator | Thursday 19 February 2026 06:36:13 +0000 (0:00:01.119) 0:52:59.819 *****
2026-02-19 06:37:04.493829 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:37:04.493858 | orchestrator |
2026-02-19 06:37:04.493881 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-19 06:37:04.493903 | orchestrator | Thursday 19 February 2026 06:36:16 +0000 (0:00:02.615) 0:53:02.435 *****
2026-02-19 06:37:04.493924 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:37:04.493945 | orchestrator |
2026-02-19 06:37:04.493964 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-19 06:37:04.493983 | orchestrator | Thursday 19 February 2026 06:36:17 +0000 (0:00:01.108) 0:53:03.544 *****
2026-02-19 06:37:04.494003 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-19 06:37:04.494091 | orchestrator |
2026-02-19 06:37:04.494108 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-19 06:37:04.494122 | orchestrator | Thursday 19 February 2026 06:36:18 +0000 (0:00:01.133) 0:53:04.678 *****
2026-02-19 06:37:04.494134 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.494148 | orchestrator |
2026-02-19 06:37:04.494161 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-19 06:37:04.494174 | orchestrator | Thursday 19 February 2026 06:36:19 +0000 (0:00:01.163) 0:53:05.841 *****
2026-02-19 06:37:04.494186 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.494198 | orchestrator |
2026-02-19 06:37:04.494211 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-19 06:37:04.494224 | orchestrator | Thursday 19 February 2026 06:36:20 +0000 (0:00:01.156) 0:53:06.998 *****
2026-02-19 06:37:04.494236 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.494249 | orchestrator |
2026-02-19 06:37:04.494261 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-19 06:37:04.494274 | orchestrator | Thursday 19 February 2026 06:36:21 +0000 (0:00:01.134) 0:53:08.132 *****
2026-02-19 06:37:04.494286 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.494299 | orchestrator |
2026-02-19 06:37:04.494311 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-19 06:37:04.494324 | orchestrator | Thursday 19 February 2026 06:36:23 +0000 (0:00:01.104) 0:53:09.237 *****
2026-02-19 06:37:04.494337 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.494348 | orchestrator |
2026-02-19 06:37:04.494359 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-19 06:37:04.494370 | orchestrator | Thursday 19 February 2026 06:36:24 +0000 (0:00:01.130) 0:53:10.368 *****
2026-02-19 06:37:04.494381 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.494392 | orchestrator |
2026-02-19 06:37:04.494402 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-19 06:37:04.494413 | orchestrator | Thursday 19 February 2026 06:36:25 +0000 (0:00:01.123) 0:53:11.491 *****
2026-02-19 06:37:04.494424 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.494435 | orchestrator |
2026-02-19 06:37:04.494446 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-19 06:37:04.494456 | orchestrator | Thursday 19 February 2026 06:36:26 +0000 (0:00:01.159) 0:53:12.651 *****
2026-02-19 06:37:04.494467 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.494478 | orchestrator |
2026-02-19 06:37:04.494489 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-19 06:37:04.494499 | orchestrator | Thursday 19 February 2026 06:36:27 +0000 (0:00:01.137) 0:53:13.788 *****
2026-02-19 06:37:04.494510 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:37:04.494521 | orchestrator |
2026-02-19 06:37:04.494532 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-19 06:37:04.494571 | orchestrator | Thursday 19 February 2026 06:36:28 +0000 (0:00:01.141) 0:53:14.929 *****
2026-02-19 06:37:04.494583 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-19 06:37:04.494594 | orchestrator |
2026-02-19 06:37:04.494625 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-19 06:37:04.494636 | orchestrator | Thursday 19 February 2026 06:36:29 +0000 (0:00:01.103) 0:53:16.033 *****
2026-02-19 06:37:04.494648 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-19 06:37:04.494659 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-19 06:37:04.494670 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-19 06:37:04.494681 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-19 06:37:04.494691 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-19 06:37:04.494702 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-19 06:37:04.494713 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-19 06:37:04.494723 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-19 06:37:04.494735 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-19 06:37:04.494746 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-19 06:37:04.494756 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-19 06:37:04.494767 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-19 06:37:04.494778 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-19 06:37:04.494789 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 06:37:04.494800 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-19 06:37:04.494810 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-19 06:37:04.494821 | orchestrator |
2026-02-19 06:37:04.494832 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-19 06:37:04.494843 | orchestrator | Thursday 19 February 2026 06:36:36 +0000 (0:00:06.854) 0:53:22.888 *****
2026-02-19 06:37:04.494853 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-19 06:37:04.494864 | orchestrator |
2026-02-19 06:37:04.494906 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-19 06:37:04.494919 | orchestrator | Thursday 19 February 2026 06:36:37 +0000 (0:00:01.107) 0:53:23.996 *****
2026-02-19 06:37:04.494930 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 06:37:04.494942 | orchestrator |
2026-02-19 06:37:04.494953 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-19 06:37:04.494964 | orchestrator | Thursday 19 February 2026 06:36:39 +0000 (0:00:01.513) 0:53:25.509 *****
2026-02-19 06:37:04.494975 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 06:37:04.494986 | orchestrator |
2026-02-19 06:37:04.494997 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-19 06:37:04.495007 | orchestrator | Thursday 19 February 2026 06:36:41 +0000 (0:00:02.006) 0:53:27.516 *****
2026-02-19 06:37:04.495018 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.495029 | orchestrator |
2026-02-19 06:37:04.495040 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 06:37:04.495051 | orchestrator | Thursday 19 February 2026 06:36:42 +0000 (0:00:01.126) 0:53:28.642 *****
2026-02-19 06:37:04.495061 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.495072 | orchestrator |
2026-02-19 06:37:04.495083 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 06:37:04.495094 | orchestrator | Thursday 19 February 2026 06:36:43 +0000 (0:00:01.113) 0:53:29.755 *****
2026-02-19 06:37:04.495114 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.495125 | orchestrator |
2026-02-19 06:37:04.495136 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 06:37:04.495147 | orchestrator | Thursday 19 February 2026 06:36:44 +0000 (0:00:01.177) 0:53:30.932 *****
2026-02-19 06:37:04.495157 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.495168 | orchestrator |
2026-02-19 06:37:04.495179 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 06:37:04.495190 | orchestrator | Thursday 19 February 2026 06:36:45 +0000 (0:00:01.175) 0:53:32.108 *****
2026-02-19 06:37:04.495200 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.495211 | orchestrator |
2026-02-19 06:37:04.495222 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 06:37:04.495233 | orchestrator | Thursday 19 February 2026 06:36:46 +0000 (0:00:01.113) 0:53:33.221 *****
2026-02-19 06:37:04.495244 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.495255 | orchestrator |
2026-02-19 06:37:04.495266 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 06:37:04.495277 | orchestrator | Thursday 19 February 2026 06:36:48 +0000 (0:00:01.136) 0:53:34.358 *****
2026-02-19 06:37:04.495287 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.495298 | orchestrator |
2026-02-19 06:37:04.495309 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 06:37:04.495320 | orchestrator | Thursday 19 February 2026 06:36:49 +0000 (0:00:01.119) 0:53:35.478 *****
2026-02-19 06:37:04.495331 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.495341 | orchestrator |
2026-02-19 06:37:04.495352 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 06:37:04.495363 | orchestrator | Thursday 19 February 2026 06:36:50 +0000 (0:00:01.161) 0:53:36.640 *****
2026-02-19 06:37:04.495374 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.495384 | orchestrator |
2026-02-19 06:37:04.495395 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 06:37:04.495406 | orchestrator | Thursday 19 February 2026 06:36:51 +0000 (0:00:01.098) 0:53:37.738 *****
2026-02-19 06:37:04.495417 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.495427 | orchestrator |
2026-02-19 06:37:04.495438 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 06:37:04.495449 | orchestrator | Thursday 19 February 2026 06:36:52 +0000 (0:00:01.121) 0:53:38.860 *****
2026-02-19 06:37:04.495460 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:37:04.495470 | orchestrator |
2026-02-19 06:37:04.495481 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 06:37:04.495492 | orchestrator | Thursday 19 February 2026 06:36:53 +0000 (0:00:01.109) 0:53:39.969 *****
2026-02-19 06:37:04.495503 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-19 06:37:04.495514 | orchestrator |
2026-02-19 06:37:04.495524 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 06:37:04.495535 | orchestrator | Thursday 19 February 2026 06:36:58 +0000 (0:00:04.561) 0:53:44.530 *****
2026-02-19 06:37:04.495546 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 06:37:04.495557 | orchestrator |
2026-02-19 06:37:04.495567 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-19 06:37:04.495578 | orchestrator | Thursday 19 February 2026 06:36:59 +0000 (0:00:01.165) 0:53:45.696 *****
2026-02-19 06:37:04.495591 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-19 06:37:04.495658 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-19 06:38:02.365632 | orchestrator |
2026-02-19 06:38:02.365757 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-19 06:38:02.365774 | orchestrator | Thursday 19 February 2026 06:37:04 +0000 (0:00:05.012) 0:53:50.708 *****
2026-02-19 06:38:02.365786 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:38:02.365798 | orchestrator |
2026-02-19 06:38:02.365810 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-19 06:38:02.365822 | orchestrator | Thursday 19 February 2026 06:37:05 +0000 (0:00:01.086) 0:53:51.795 *****
2026-02-19 06:38:02.365834 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:38:02.365846 | orchestrator |
2026-02-19 06:38:02.365859 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:38:02.365872 | orchestrator | Thursday 19 February 2026 06:37:06 +0000 (0:00:01.133) 0:53:52.929 *****
2026-02-19 06:38:02.365884 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:38:02.365894 | orchestrator |
2026-02-19 06:38:02.365906 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:38:02.365918 | orchestrator | Thursday 19 February 2026 06:37:07 +0000 (0:00:01.188) 0:53:54.117 *****
2026-02-19 06:38:02.365929 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:38:02.365940 | orchestrator |
2026-02-19 06:38:02.365951 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:38:02.365963 | orchestrator | Thursday 19 February 2026 06:37:09 +0000 (0:00:01.123) 0:53:55.241 *****
2026-02-19 06:38:02.365974 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:38:02.365985 | orchestrator |
2026-02-19 06:38:02.365996 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:38:02.366007 | orchestrator | Thursday 19 February 2026 06:37:10 +0000 (0:00:01.118) 0:53:56.360 *****
2026-02-19 06:38:02.366071 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:38:02.366083 | orchestrator |
2026-02-19 06:38:02.366093 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:38:02.366104 | orchestrator | Thursday 19 February 2026 06:37:11 +0000 (0:00:01.237) 0:53:57.597
***** 2026-02-19 06:38:02.366116 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-19 06:38:02.366127 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-19 06:38:02.366137 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-19 06:38:02.366148 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:38:02.366158 | orchestrator | 2026-02-19 06:38:02.366169 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-19 06:38:02.366179 | orchestrator | Thursday 19 February 2026 06:37:12 +0000 (0:00:01.402) 0:53:58.999 ***** 2026-02-19 06:38:02.366190 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-19 06:38:02.366201 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-19 06:38:02.366211 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-19 06:38:02.366222 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:38:02.366232 | orchestrator | 2026-02-19 06:38:02.366242 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 06:38:02.366254 | orchestrator | Thursday 19 February 2026 06:37:14 +0000 (0:00:01.386) 0:54:00.386 ***** 2026-02-19 06:38:02.366264 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-19 06:38:02.366275 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-19 06:38:02.366286 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-19 06:38:02.366297 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:38:02.366350 | orchestrator | 2026-02-19 06:38:02.366362 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 06:38:02.366374 | orchestrator | Thursday 19 February 2026 06:37:15 +0000 (0:00:01.435) 0:54:01.822 ***** 2026-02-19 06:38:02.366386 | orchestrator | ok: 
[testbed-node-5] 2026-02-19 06:38:02.366397 | orchestrator | 2026-02-19 06:38:02.366408 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-19 06:38:02.366419 | orchestrator | Thursday 19 February 2026 06:37:16 +0000 (0:00:01.138) 0:54:02.960 ***** 2026-02-19 06:38:02.366430 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-19 06:38:02.366442 | orchestrator | 2026-02-19 06:38:02.366453 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-19 06:38:02.366464 | orchestrator | Thursday 19 February 2026 06:37:18 +0000 (0:00:01.317) 0:54:04.278 ***** 2026-02-19 06:38:02.366476 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:38:02.366487 | orchestrator | 2026-02-19 06:38:02.366498 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-19 06:38:02.366509 | orchestrator | Thursday 19 February 2026 06:37:19 +0000 (0:00:01.723) 0:54:06.002 ***** 2026-02-19 06:38:02.366521 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:38:02.366532 | orchestrator | 2026-02-19 06:38:02.366543 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-19 06:38:02.366554 | orchestrator | Thursday 19 February 2026 06:37:20 +0000 (0:00:01.126) 0:54:07.128 ***** 2026-02-19 06:38:02.366565 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5 2026-02-19 06:38:02.366592 | orchestrator | 2026-02-19 06:38:02.366602 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-19 06:38:02.366613 | orchestrator | Thursday 19 February 2026 06:37:22 +0000 (0:00:01.575) 0:54:08.704 ***** 2026-02-19 06:38:02.366624 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-19 06:38:02.366635 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 
2026-02-19 06:38:02.366645 | orchestrator | 2026-02-19 06:38:02.366656 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-19 06:38:02.366681 | orchestrator | Thursday 19 February 2026 06:37:24 +0000 (0:00:01.863) 0:54:10.568 ***** 2026-02-19 06:38:02.366692 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 06:38:02.366702 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-19 06:38:02.366731 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-19 06:38:02.366743 | orchestrator | 2026-02-19 06:38:02.366754 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-19 06:38:02.366765 | orchestrator | Thursday 19 February 2026 06:37:27 +0000 (0:00:03.400) 0:54:13.968 ***** 2026-02-19 06:38:02.366775 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-19 06:38:02.366785 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-19 06:38:02.366796 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:38:02.366806 | orchestrator | 2026-02-19 06:38:02.366817 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-19 06:38:02.366827 | orchestrator | Thursday 19 February 2026 06:37:29 +0000 (0:00:01.983) 0:54:15.951 ***** 2026-02-19 06:38:02.366838 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:38:02.366849 | orchestrator | 2026-02-19 06:38:02.366859 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-19 06:38:02.366869 | orchestrator | Thursday 19 February 2026 06:37:31 +0000 (0:00:01.541) 0:54:17.493 ***** 2026-02-19 06:38:02.366879 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:38:02.366888 | orchestrator | 2026-02-19 06:38:02.366897 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-19 
06:38:02.366906 | orchestrator | Thursday 19 February 2026 06:37:32 +0000 (0:00:01.110) 0:54:18.604 ***** 2026-02-19 06:38:02.366915 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5 2026-02-19 06:38:02.366934 | orchestrator | 2026-02-19 06:38:02.366943 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-19 06:38:02.366952 | orchestrator | Thursday 19 February 2026 06:37:33 +0000 (0:00:01.461) 0:54:20.066 ***** 2026-02-19 06:38:02.366961 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5 2026-02-19 06:38:02.366970 | orchestrator | 2026-02-19 06:38:02.366979 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-19 06:38:02.366989 | orchestrator | Thursday 19 February 2026 06:37:35 +0000 (0:00:01.454) 0:54:21.520 ***** 2026-02-19 06:38:02.366998 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:38:02.367007 | orchestrator | 2026-02-19 06:38:02.367017 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-19 06:38:02.367026 | orchestrator | Thursday 19 February 2026 06:37:37 +0000 (0:00:02.050) 0:54:23.570 ***** 2026-02-19 06:38:02.367035 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:38:02.367044 | orchestrator | 2026-02-19 06:38:02.367053 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-19 06:38:02.367062 | orchestrator | Thursday 19 February 2026 06:37:39 +0000 (0:00:01.928) 0:54:25.499 ***** 2026-02-19 06:38:02.367071 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:38:02.367081 | orchestrator | 2026-02-19 06:38:02.367090 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-19 06:38:02.367100 | orchestrator | Thursday 19 February 2026 06:37:41 +0000 (0:00:02.220) 0:54:27.720 ***** 2026-02-19 06:38:02.367109 | 
orchestrator | ok: [testbed-node-5] 2026-02-19 06:38:02.367118 | orchestrator | 2026-02-19 06:38:02.367127 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-19 06:38:02.367136 | orchestrator | Thursday 19 February 2026 06:37:43 +0000 (0:00:02.276) 0:54:29.996 ***** 2026-02-19 06:38:02.367145 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:38:02.367153 | orchestrator | 2026-02-19 06:38:02.367163 | orchestrator | TASK [Restart ceph mds] ******************************************************** 2026-02-19 06:38:02.367173 | orchestrator | Thursday 19 February 2026 06:37:45 +0000 (0:00:01.608) 0:54:31.605 ***** 2026-02-19 06:38:02.367182 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:38:02.367192 | orchestrator | 2026-02-19 06:38:02.367200 | orchestrator | TASK [Restart active mds] ****************************************************** 2026-02-19 06:38:02.367209 | orchestrator | Thursday 19 February 2026 06:37:46 +0000 (0:00:01.112) 0:54:32.717 ***** 2026-02-19 06:38:02.367218 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:38:02.367228 | orchestrator | 2026-02-19 06:38:02.367237 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] ************************************** 2026-02-19 06:38:02.367245 | orchestrator | 2026-02-19 06:38:02.367254 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 06:38:02.367263 | orchestrator | Thursday 19 February 2026 06:37:54 +0000 (0:00:07.512) 0:54:40.230 ***** 2026-02-19 06:38:02.367273 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4 2026-02-19 06:38:02.367282 | orchestrator | 2026-02-19 06:38:02.367291 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-19 06:38:02.367301 | orchestrator | Thursday 19 February 2026 06:37:55 +0000 (0:00:01.204) 0:54:41.434 ***** 2026-02-19 06:38:02.367310 | 
orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:02.367319 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:02.367328 | orchestrator | 2026-02-19 06:38:02.367336 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-19 06:38:02.367346 | orchestrator | Thursday 19 February 2026 06:37:56 +0000 (0:00:01.641) 0:54:43.076 ***** 2026-02-19 06:38:02.367354 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:02.367364 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:02.367372 | orchestrator | 2026-02-19 06:38:02.367381 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 06:38:02.367391 | orchestrator | Thursday 19 February 2026 06:37:58 +0000 (0:00:01.506) 0:54:44.583 ***** 2026-02-19 06:38:02.367400 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:02.367415 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:02.367425 | orchestrator | 2026-02-19 06:38:02.367434 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 06:38:02.367443 | orchestrator | Thursday 19 February 2026 06:37:59 +0000 (0:00:01.543) 0:54:46.127 ***** 2026-02-19 06:38:02.367452 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:02.367461 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:02.367471 | orchestrator | 2026-02-19 06:38:02.367485 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-19 06:38:02.367494 | orchestrator | Thursday 19 February 2026 06:38:01 +0000 (0:00:01.221) 0:54:47.349 ***** 2026-02-19 06:38:02.367503 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:02.367517 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:24.711471 | orchestrator | 2026-02-19 06:38:24.711692 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-19 06:38:24.711729 | orchestrator | Thursday 19 February 
2026 06:38:02 +0000 (0:00:01.229) 0:54:48.578 ***** 2026-02-19 06:38:24.711750 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:24.711770 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:24.711789 | orchestrator | 2026-02-19 06:38:24.711808 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-19 06:38:24.711828 | orchestrator | Thursday 19 February 2026 06:38:03 +0000 (0:00:01.225) 0:54:49.803 ***** 2026-02-19 06:38:24.711849 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:24.711869 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:38:24.711889 | orchestrator | 2026-02-19 06:38:24.711904 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-19 06:38:24.711915 | orchestrator | Thursday 19 February 2026 06:38:04 +0000 (0:00:01.208) 0:54:51.011 ***** 2026-02-19 06:38:24.711926 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:24.711937 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:24.711948 | orchestrator | 2026-02-19 06:38:24.711959 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-19 06:38:24.711971 | orchestrator | Thursday 19 February 2026 06:38:05 +0000 (0:00:01.209) 0:54:52.222 ***** 2026-02-19 06:38:24.711982 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:38:24.711996 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:38:24.712009 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:38:24.712021 | orchestrator | 2026-02-19 06:38:24.712034 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-19 06:38:24.712047 | orchestrator | Thursday 19 February 2026 06:38:07 +0000 (0:00:01.621) 0:54:53.843 ***** 2026-02-19 06:38:24.712059 | 
orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:24.712071 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:24.712083 | orchestrator | 2026-02-19 06:38:24.712095 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-19 06:38:24.712108 | orchestrator | Thursday 19 February 2026 06:38:09 +0000 (0:00:01.395) 0:54:55.238 ***** 2026-02-19 06:38:24.712120 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:38:24.712132 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:38:24.712145 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:38:24.712157 | orchestrator | 2026-02-19 06:38:24.712169 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-19 06:38:24.712183 | orchestrator | Thursday 19 February 2026 06:38:11 +0000 (0:00:02.947) 0:54:58.186 ***** 2026-02-19 06:38:24.712195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-19 06:38:24.712208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-19 06:38:24.712221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-19 06:38:24.712233 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:24.712303 | orchestrator | 2026-02-19 06:38:24.712315 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-19 06:38:24.712326 | orchestrator | Thursday 19 February 2026 06:38:13 +0000 (0:00:01.391) 0:54:59.577 ***** 2026-02-19 06:38:24.712340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-19 06:38:24.712354 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-19 06:38:24.712366 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-19 06:38:24.712377 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:24.712388 | orchestrator | 2026-02-19 06:38:24.712399 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-19 06:38:24.712410 | orchestrator | Thursday 19 February 2026 06:38:14 +0000 (0:00:01.593) 0:55:01.171 ***** 2026-02-19 06:38:24.712424 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:24.712475 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:24.712489 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:24.712500 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:24.712511 | orchestrator | 2026-02-19 06:38:24.712522 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-19 06:38:24.712533 | orchestrator | Thursday 19 February 2026 06:38:16 +0000 (0:00:01.156) 0:55:02.328 ***** 2026-02-19 06:38:24.712547 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 06:38:09.557313', 'end': '2026-02-19 06:38:09.604022', 'delta': '0:00:00.046709', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-19 06:38:24.712596 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 06:38:10.144789', 'end': '2026-02-19 06:38:10.197068', 'delta': '0:00:00.052279', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-19 06:38:24.712623 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8bdbabe346bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 06:38:10.730673', 'end': '2026-02-19 06:38:10.779911', 'delta': '0:00:00.049238', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8bdbabe346bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-19 06:38:24.712635 | orchestrator | 2026-02-19 06:38:24.712646 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-19 06:38:24.712657 | orchestrator | Thursday 19 February 2026 06:38:17 +0000 (0:00:01.138) 0:55:03.467 ***** 2026-02-19 06:38:24.712668 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:24.712679 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:24.712690 | orchestrator | 2026-02-19 06:38:24.712701 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-19 06:38:24.712712 | orchestrator | Thursday 19 February 2026 06:38:18 +0000 (0:00:01.394) 0:55:04.862 ***** 2026-02-19 06:38:24.712722 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:24.712733 | orchestrator | 2026-02-19 06:38:24.712744 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-19 06:38:24.712755 | orchestrator | Thursday 
19 February 2026 06:38:19 +0000 (0:00:01.251) 0:55:06.114 ***** 2026-02-19 06:38:24.712766 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:24.712777 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:24.712788 | orchestrator | 2026-02-19 06:38:24.712799 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-19 06:38:24.712810 | orchestrator | Thursday 19 February 2026 06:38:21 +0000 (0:00:01.311) 0:55:07.426 ***** 2026-02-19 06:38:24.712821 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:38:24.712832 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-19 06:38:24.712843 | orchestrator | 2026-02-19 06:38:24.712860 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 06:38:24.712871 | orchestrator | Thursday 19 February 2026 06:38:23 +0000 (0:00:02.248) 0:55:09.675 ***** 2026-02-19 06:38:24.712882 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:24.712900 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:36.218156 | orchestrator | 2026-02-19 06:38:36.218251 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-19 06:38:36.218263 | orchestrator | Thursday 19 February 2026 06:38:24 +0000 (0:00:01.244) 0:55:10.919 ***** 2026-02-19 06:38:36.218271 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:36.218278 | orchestrator | 2026-02-19 06:38:36.218285 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-19 06:38:36.218292 | orchestrator | Thursday 19 February 2026 06:38:25 +0000 (0:00:01.133) 0:55:12.053 ***** 2026-02-19 06:38:36.218298 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:36.218304 | orchestrator | 2026-02-19 06:38:36.218311 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-19 
06:38:36.218317 | orchestrator | Thursday 19 February 2026 06:38:27 +0000 (0:00:01.211) 0:55:13.265 ***** 2026-02-19 06:38:36.218341 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:36.218348 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:38:36.218354 | orchestrator | 2026-02-19 06:38:36.218361 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-19 06:38:36.218367 | orchestrator | Thursday 19 February 2026 06:38:28 +0000 (0:00:01.192) 0:55:14.458 ***** 2026-02-19 06:38:36.218373 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:36.218379 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:38:36.218385 | orchestrator | 2026-02-19 06:38:36.218392 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-19 06:38:36.218398 | orchestrator | Thursday 19 February 2026 06:38:29 +0000 (0:00:01.187) 0:55:15.645 ***** 2026-02-19 06:38:36.218405 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:36.218412 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:36.218418 | orchestrator | 2026-02-19 06:38:36.218424 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-19 06:38:36.218430 | orchestrator | Thursday 19 February 2026 06:38:30 +0000 (0:00:01.372) 0:55:17.017 ***** 2026-02-19 06:38:36.218436 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:36.218442 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:38:36.218448 | orchestrator | 2026-02-19 06:38:36.218454 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 06:38:36.218461 | orchestrator | Thursday 19 February 2026 06:38:32 +0000 (0:00:01.498) 0:55:18.516 ***** 2026-02-19 06:38:36.218467 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:36.218473 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:36.218479 | orchestrator | 2026-02-19 
06:38:36.218485 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 06:38:36.218500 | orchestrator | Thursday 19 February 2026 06:38:33 +0000 (0:00:01.238) 0:55:19.754 ***** 2026-02-19 06:38:36.218507 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:36.218513 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:38:36.218519 | orchestrator | 2026-02-19 06:38:36.218525 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 06:38:36.218532 | orchestrator | Thursday 19 February 2026 06:38:34 +0000 (0:00:01.177) 0:55:20.932 ***** 2026-02-19 06:38:36.218538 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:38:36.218544 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:38:36.218550 | orchestrator | 2026-02-19 06:38:36.218603 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 06:38:36.218609 | orchestrator | Thursday 19 February 2026 06:38:35 +0000 (0:00:01.242) 0:55:22.174 ***** 2026-02-19 06:38:36.218617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:36.218627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542', 'dm-uuid-LVM-lX34uhB8tmDTkL93DczNXv6QbAw0ysjKmdjNAgdMohU9ZcAXcHNfClcWYQxdmajV'], 'uuids': ['76bd5aba-0bb7-430d-953d-ee2f2591c83e'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV']}})  2026-02-19 06:38:36.218648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417', 'scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '50533a39', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 06:38:36.218678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-he7JRo-1c5L-pX5O-Be3A-VFvn-vFA2-R1K8r6', 'scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e', 'scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159']}})  2026-02-19 06:38:36.218687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:36.218694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:36.218702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 06:38:36.218709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:36.218716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl', 'dm-uuid-CRYPT-LUKS2-96c3bdbb8dfb4f8d89601607ffc96021-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:38:36.218723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:36.218744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159', 'dm-uuid-LVM-woOiLPc2MZX9tMqNu9mJ52M00GUnNLJGpmysyPKim6lEMTRsO9IDguylIzFZfnRl'], 'uuids': ['96c3bdbb-8dfb-4f8d-8960-1607ffc96021'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl']}})  2026-02-19 06:38:36.333839 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:36.333941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qeKANd-btTr-kyqx-ZYbg-qz1F-HqnA-ll4bBH', 'scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8', 'scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542']}})  2026-02-19 06:38:36.333964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160', 'dm-uuid-LVM-rZldl4LmlLXg6d7bs7fyJX4wA6bTnXoE36sCfZeCCq67ndja1fQrkP9qxd3UF2mf'], 'uuids': ['a59715b7-019c-4dda-9336-d3b7804a06c1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf']}})  2026-02-19 06:38:36.333981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:36.333998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58', 'scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85ad02dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 06:38:36.334129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23a82e55', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 06:38:36.334146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:36.334155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hPAd08-UuBL-3Ygg-jY8a-jEiG-hu1p-INZmAJ', 'scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262', 'scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02']}})  2026-02-19 06:38:36.334166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:36.334183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:36.334211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV', 'dm-uuid-CRYPT-LUKS2-76bd5aba0bb7430d953dee2f2591c83e-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:38:36.334244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:37.595994 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:37.596112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-20-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 06:38:37.596135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:37.596150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww', 'dm-uuid-CRYPT-LUKS2-f68538a13fa347dc9b85a13ec62262c1-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:38:37.596163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:37.596177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02', 'dm-uuid-LVM-av3z15qCzrck2TCuh26quy9SxGc4Uj0HHGk96w6thbK5NXQZcgefX0YYJ6eJW1Ww'], 'uuids': ['f68538a1-3fa3-47dc-9b85-a13ec62262c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww']}})  2026-02-19 06:38:37.596222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6C6XL0-fLb8-YfTA-cysM-yAaf-4LBE-w1N2gW', 'scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0', 'scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160']}})  2026-02-19 06:38:37.596247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:37.596279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '28e9d7a7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 06:38:37.596288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:37.596303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:38:37.596311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf', 'dm-uuid-CRYPT-LUKS2-a59715b7019c4dda9336d3b7804a06c1-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:38:37.596319 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:38:37.596327 | orchestrator | 2026-02-19 06:38:37.596336 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 06:38:37.596344 | orchestrator | Thursday 19 February 2026 06:38:37 +0000 (0:00:01.534) 0:55:23.709 ***** 2026-02-19 06:38:37.596362 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.708435 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542', 'dm-uuid-LVM-lX34uhB8tmDTkL93DczNXv6QbAw0ysjKmdjNAgdMohU9ZcAXcHNfClcWYQxdmajV'], 'uuids': ['76bd5aba-0bb7-430d-953d-ee2f2591c83e'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.708511 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417', 'scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '50533a39', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.708520 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-he7JRo-1c5L-pX5O-Be3A-VFvn-vFA2-R1K8r6', 'scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e', 'scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.708545 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.708618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.708638 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.708644 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.708650 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl', 'dm-uuid-CRYPT-LUKS2-96c3bdbb8dfb4f8d89601607ffc96021-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.708662 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.708667 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.708679 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159', 'dm-uuid-LVM-woOiLPc2MZX9tMqNu9mJ52M00GUnNLJGpmysyPKim6lEMTRsO9IDguylIzFZfnRl'], 'uuids': ['96c3bdbb-8dfb-4f8d-8960-1607ffc96021'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.770407 | orchestrator | skipping: [testbed-node-4] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160', 'dm-uuid-LVM-rZldl4LmlLXg6d7bs7fyJX4wA6bTnXoE36sCfZeCCq67ndja1fQrkP9qxd3UF2mf'], 'uuids': ['a59715b7-019c-4dda-9336-d3b7804a06c1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.770490 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qeKANd-btTr-kyqx-ZYbg-qz1F-HqnA-ll4bBH', 'scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8', 'scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.770527 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58', 'scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85ad02dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.770543 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.770622 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': 
['lvm-pv-uuid-hPAd08-UuBL-3Ygg-jY8a-jEiG-hu1p-INZmAJ', 'scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262', 'scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.770639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23a82e55', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 
'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.770664 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.770683 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.770736 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-02-19 06:38:37.882011 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.882169 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-20-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.882214 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV', 'dm-uuid-CRYPT-LUKS2-76bd5aba0bb7430d953dee2f2591c83e-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.882227 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.882240 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:38:37.882269 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww', 'dm-uuid-CRYPT-LUKS2-f68538a13fa347dc9b85a13ec62262c1-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.882300 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.882313 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02', 'dm-uuid-LVM-av3z15qCzrck2TCuh26quy9SxGc4Uj0HHGk96w6thbK5NXQZcgefX0YYJ6eJW1Ww'], 'uuids': ['f68538a1-3fa3-47dc-9b85-a13ec62262c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.882326 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6C6XL0-fLb8-YfTA-cysM-yAaf-4LBE-w1N2gW', 'scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0', 'scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.882349 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:38:37.882377 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '28e9d7a7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:39:06.482766 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:39:06.482915 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:39:06.482934 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf', 'dm-uuid-CRYPT-LUKS2-a59715b7019c4dda9336d3b7804a06c1-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 06:39:06.482948 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:06.482963 | orchestrator |
2026-02-19 06:39:06.482976 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-19 06:39:06.482988 | orchestrator | Thursday 19 February 2026 06:38:39 +0000 (0:00:01.553) 0:55:25.263 *****
2026-02-19 06:39:06.483000 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:06.483011 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:06.483022 | orchestrator |
2026-02-19 06:39:06.483034 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-19 06:39:06.483045 | orchestrator | Thursday 19 February 2026 06:38:40 +0000 (0:00:01.641) 0:55:26.904 *****
2026-02-19 06:39:06.483055 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:06.483066 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:06.483077 | orchestrator |
2026-02-19 06:39:06.483103 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-19 06:39:06.483114 | orchestrator | Thursday 19 February 2026 06:38:41 +0000 (0:00:01.199) 0:55:28.104 *****
2026-02-19 06:39:06.483125 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:06.483136 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:06.483147 | orchestrator |
2026-02-19 06:39:06.483159 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-19 06:39:06.483170 | orchestrator | Thursday 19 February 2026 06:38:43 +0000 (0:00:01.580) 0:55:29.684 *****
2026-02-19 06:39:06.483181 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:06.483192 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:06.483203 | orchestrator |
2026-02-19 06:39:06.483213 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-19 06:39:06.483224 | orchestrator | Thursday 19 February 2026 06:38:44 +0000 (0:00:01.188) 0:55:30.873 *****
2026-02-19 06:39:06.483235 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:06.483246 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:06.483257 | orchestrator |
2026-02-19 06:39:06.483268 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-19 06:39:06.483279 | orchestrator | Thursday 19 February 2026 06:38:45 +0000 (0:00:01.324) 0:55:32.197 *****
2026-02-19 06:39:06.483299 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:06.483312 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:06.483325 | orchestrator |
2026-02-19 06:39:06.483338 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-19 06:39:06.483352 | orchestrator | Thursday 19 February 2026 06:38:47 +0000 (0:00:01.211) 0:55:33.409 *****
2026-02-19 06:39:06.483364 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-19 06:39:06.483377 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-19 06:39:06.483389 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-19 06:39:06.483402 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-19 06:39:06.483415 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-19 06:39:06.483427 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-19 06:39:06.483439 | orchestrator |
2026-02-19 06:39:06.483452 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-19 06:39:06.483465 | orchestrator | Thursday 19 February 2026 06:38:49 +0000 (0:00:02.058) 0:55:35.468 *****
2026-02-19 06:39:06.483495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-19 06:39:06.483509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-19 06:39:06.483522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-19 06:39:06.483534 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:06.483572 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-19 06:39:06.483584 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-19 06:39:06.483597 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-19 06:39:06.483609 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:06.483621 | orchestrator |
2026-02-19 06:39:06.483635 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-19 06:39:06.483647 | orchestrator | Thursday 19 February 2026 06:38:50 +0000 (0:00:01.614) 0:55:37.083 *****
2026-02-19 06:39:06.483658 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4
2026-02-19 06:39:06.483670 | orchestrator |
2026-02-19 06:39:06.483681 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:39:06.483692 | orchestrator | Thursday 19 February 2026 06:38:52 +0000 (0:00:01.211) 0:55:38.295 *****
2026-02-19 06:39:06.483704 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:06.483714 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:06.483725 | orchestrator |
2026-02-19 06:39:06.483736 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:39:06.483747 | orchestrator | Thursday 19 February 2026 06:38:53 +0000 (0:00:01.230) 0:55:39.525 *****
2026-02-19 06:39:06.483758 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:06.483768 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:06.483779 | orchestrator |
2026-02-19 06:39:06.483790 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:39:06.483801 | orchestrator | Thursday 19 February 2026 06:38:54 +0000 (0:00:01.253) 0:55:40.779 *****
2026-02-19 06:39:06.483812 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:06.483823 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:06.483834 | orchestrator |
2026-02-19 06:39:06.483844 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:39:06.483855 | orchestrator | Thursday 19 February 2026 06:38:55 +0000 (0:00:01.210) 0:55:41.989 *****
2026-02-19 06:39:06.483866 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:06.483877 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:06.483888 | orchestrator |
2026-02-19 06:39:06.483899 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:39:06.483910 | orchestrator | Thursday 19 February 2026 06:38:57 +0000 (0:00:01.385) 0:55:43.374 *****
2026-02-19 06:39:06.483929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:39:06.483940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 06:39:06.483950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 06:39:06.483961 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:06.483972 | orchestrator |
2026-02-19 06:39:06.483983 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 06:39:06.483994 | orchestrator | Thursday 19 February 2026 06:38:58 +0000 (0:00:01.716) 0:55:45.091 *****
2026-02-19 06:39:06.484004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:39:06.484015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 06:39:06.484026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 06:39:06.484043 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:06.484054 | orchestrator |
2026-02-19 06:39:06.484065 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 06:39:06.484076 | orchestrator | Thursday 19 February 2026 06:39:00 +0000 (0:00:01.394) 0:55:46.486 *****
2026-02-19 06:39:06.484087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:39:06.484098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 06:39:06.484108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 06:39:06.484119 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:06.484130 | orchestrator |
2026-02-19 06:39:06.484141 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 06:39:06.484151 | orchestrator | Thursday 19 February 2026 06:39:01 +0000 (0:00:01.407) 0:55:47.893 *****
2026-02-19 06:39:06.484162 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:06.484173 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:06.484184 | orchestrator |
2026-02-19 06:39:06.484195 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 06:39:06.484206 | orchestrator | Thursday 19 February 2026 06:39:02 +0000 (0:00:01.241) 0:55:49.135 *****
2026-02-19 06:39:06.484217 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-19 06:39:06.484227 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-19 06:39:06.484238 | orchestrator |
2026-02-19 06:39:06.484249 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-19 06:39:06.484260 | orchestrator | Thursday 19 February 2026 06:39:04 +0000 (0:00:01.452) 0:55:50.587 *****
2026-02-19 06:39:06.484271 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:39:06.484281 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:39:06.484292 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:39:06.484303 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:39:06.484314 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 06:39:06.484325 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-19 06:39:06.484343 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 06:39:49.442551 | orchestrator |
2026-02-19 06:39:49.442684 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-19 06:39:49.442703 | orchestrator | Thursday 19 February 2026 06:39:06 +0000 (0:00:02.103) 0:55:52.690 *****
2026-02-19 06:39:49.442715 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:39:49.442727 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:39:49.442739 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:39:49.442752 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:39:49.442764 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 06:39:49.442801 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-19 06:39:49.442814 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 06:39:49.442825 | orchestrator |
2026-02-19 06:39:49.442836 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-02-19 06:39:49.442847 | orchestrator | Thursday 19 February 2026 06:39:08 +0000 (0:00:02.530) 0:55:55.221 *****
2026-02-19 06:39:49.442859 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:49.442872 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:49.442883 | orchestrator |
2026-02-19 06:39:49.442895 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 06:39:49.442906 | orchestrator | Thursday 19 February 2026 06:39:10 +0000 (0:00:01.303) 0:55:56.525 *****
2026-02-19 06:39:49.442917 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4
2026-02-19 06:39:49.442929 | orchestrator |
2026-02-19 06:39:49.442940 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-19 06:39:49.442952 | orchestrator | Thursday 19 February 2026 06:39:11 +0000 (0:00:01.536) 0:55:58.062 *****
2026-02-19 06:39:49.442963 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4
2026-02-19 06:39:49.442974 | orchestrator |
2026-02-19 06:39:49.442986 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-19 06:39:49.442997 | orchestrator | Thursday 19 February 2026 06:39:13 +0000 (0:00:01.242) 0:55:59.304 *****
2026-02-19 06:39:49.443008 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:49.443019 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:49.443030 | orchestrator |
2026-02-19 06:39:49.443042 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-19 06:39:49.443053 | orchestrator | Thursday 19 February 2026 06:39:14 +0000 (0:00:01.194) 0:56:00.499 *****
2026-02-19 06:39:49.443064 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:49.443076 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:49.443088 | orchestrator |
2026-02-19 06:39:49.443100 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-19 06:39:49.443112 | orchestrator | Thursday 19 February 2026 06:39:15 +0000 (0:00:01.621) 0:56:02.120 *****
2026-02-19 06:39:49.443123 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:49.443135 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:49.443145 | orchestrator |
2026-02-19 06:39:49.443156 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-19 06:39:49.443167 | orchestrator | Thursday 19 February 2026 06:39:17 +0000 (0:00:01.647) 0:56:03.768 *****
2026-02-19 06:39:49.443179 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:49.443207 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:49.443221 | orchestrator |
2026-02-19 06:39:49.443233 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-19 06:39:49.443245 | orchestrator | Thursday 19 February 2026 06:39:19 +0000 (0:00:01.614) 0:56:05.382 *****
2026-02-19 06:39:49.443256 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:49.443267 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:49.443279 | orchestrator |
2026-02-19 06:39:49.443291 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-19 06:39:49.443302 | orchestrator | Thursday 19 February 2026 06:39:20 +0000 (0:00:01.179) 0:56:06.562 *****
2026-02-19 06:39:49.443313 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:49.443324 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:49.443334 | orchestrator |
2026-02-19 06:39:49.443346 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-19 06:39:49.443356 | orchestrator | Thursday 19 February 2026 06:39:21 +0000 (0:00:01.209) 0:56:07.771 *****
2026-02-19 06:39:49.443368 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:49.443380 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:49.443400 | orchestrator |
2026-02-19 06:39:49.443412 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-19 06:39:49.443422 | orchestrator | Thursday 19 February 2026 06:39:22 +0000 (0:00:01.146) 0:56:08.918 *****
2026-02-19 06:39:49.443433 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:49.443444 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:49.443455 | orchestrator |
2026-02-19 06:39:49.443466 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-19 06:39:49.443477 | orchestrator | Thursday 19 February 2026 06:39:24 +0000 (0:00:01.589) 0:56:10.508 *****
2026-02-19 06:39:49.443488 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:49.443498 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:49.443510 | orchestrator |
2026-02-19 06:39:49.443541 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-19 06:39:49.443553 | orchestrator | Thursday 19 February 2026 06:39:25 +0000 (0:00:01.674) 0:56:12.182 *****
2026-02-19 06:39:49.443564 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:49.443575 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:49.443585 | orchestrator |
2026-02-19 06:39:49.443597 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 06:39:49.443608 | orchestrator | Thursday 19 February 2026 06:39:27 +0000 (0:00:01.539) 0:56:13.722 *****
2026-02-19 06:39:49.443620 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:49.443650 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:49.443659 | orchestrator |
2026-02-19 06:39:49.443666 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 06:39:49.443672 | orchestrator | Thursday 19 February 2026 06:39:28 +0000 (0:00:01.208) 0:56:14.931 *****
2026-02-19 06:39:49.443679 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:49.443686 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:49.443692 | orchestrator |
2026-02-19 06:39:49.443699 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 06:39:49.443706 | orchestrator | Thursday 19 February 2026 06:39:29 +0000 (0:00:01.284) 0:56:16.215 *****
2026-02-19 06:39:49.443712 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:49.443719 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:49.443725 | orchestrator |
2026-02-19 06:39:49.443732 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 06:39:49.443739 | orchestrator | Thursday 19 February 2026 06:39:31 +0000 (0:00:01.225) 0:56:17.441 *****
2026-02-19 06:39:49.443745 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:39:49.443752 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:39:49.443758 | orchestrator |
2026-02-19 06:39:49.443765 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 06:39:49.443772 | orchestrator | Thursday 19 February 2026 06:39:32 +0000 (0:00:01.214) 0:56:18.656 *****
2026-02-19 06:39:49.443778 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:49.443785 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:49.443792 | orchestrator |
2026-02-19 06:39:49.443799 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 06:39:49.443805 | orchestrator | Thursday 19 February 2026 06:39:33 +0000 (0:00:01.194) 0:56:19.851 *****
2026-02-19 06:39:49.443812 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:39:49.443819 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:39:49.443825 | orchestrator |
2026-02-19 06:39:49.443832 | orchestrator | TASK [ceph-handler : Set_fact
handler_mgr_status] ****************************** 2026-02-19 06:39:49.443839 | orchestrator | Thursday 19 February 2026 06:39:34 +0000 (0:00:01.214) 0:56:21.066 ***** 2026-02-19 06:39:49.443845 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:39:49.443852 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:39:49.443858 | orchestrator | 2026-02-19 06:39:49.443865 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-19 06:39:49.443872 | orchestrator | Thursday 19 February 2026 06:39:36 +0000 (0:00:01.235) 0:56:22.302 ***** 2026-02-19 06:39:49.443878 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:39:49.443890 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:39:49.443897 | orchestrator | 2026-02-19 06:39:49.443904 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-19 06:39:49.443910 | orchestrator | Thursday 19 February 2026 06:39:37 +0000 (0:00:01.276) 0:56:23.578 ***** 2026-02-19 06:39:49.443917 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:39:49.443924 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:39:49.443930 | orchestrator | 2026-02-19 06:39:49.443937 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-19 06:39:49.443944 | orchestrator | Thursday 19 February 2026 06:39:38 +0000 (0:00:01.224) 0:56:24.802 ***** 2026-02-19 06:39:49.443950 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:39:49.443957 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:39:49.443964 | orchestrator | 2026-02-19 06:39:49.443970 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-19 06:39:49.443977 | orchestrator | Thursday 19 February 2026 06:39:39 +0000 (0:00:01.209) 0:56:26.012 ***** 2026-02-19 06:39:49.443983 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:39:49.443990 | orchestrator | skipping: [testbed-node-4] 
2026-02-19 06:39:49.443997 | orchestrator | 2026-02-19 06:39:49.444003 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-19 06:39:49.444015 | orchestrator | Thursday 19 February 2026 06:39:41 +0000 (0:00:01.217) 0:56:27.230 ***** 2026-02-19 06:39:49.444021 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:39:49.444028 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:39:49.444034 | orchestrator | 2026-02-19 06:39:49.444040 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-19 06:39:49.444046 | orchestrator | Thursday 19 February 2026 06:39:42 +0000 (0:00:01.226) 0:56:28.457 ***** 2026-02-19 06:39:49.444052 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:39:49.444058 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:39:49.444064 | orchestrator | 2026-02-19 06:39:49.444071 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-19 06:39:49.444077 | orchestrator | Thursday 19 February 2026 06:39:43 +0000 (0:00:01.194) 0:56:29.652 ***** 2026-02-19 06:39:49.444083 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:39:49.444089 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:39:49.444095 | orchestrator | 2026-02-19 06:39:49.444101 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-19 06:39:49.444107 | orchestrator | Thursday 19 February 2026 06:39:44 +0000 (0:00:01.186) 0:56:30.838 ***** 2026-02-19 06:39:49.444113 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:39:49.444128 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:39:49.444134 | orchestrator | 2026-02-19 06:39:49.444141 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-19 06:39:49.444147 | orchestrator | Thursday 19 February 2026 06:39:45 +0000 (0:00:01.217) 0:56:32.056 ***** 
2026-02-19 06:39:49.444153 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:39:49.444159 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:39:49.444165 | orchestrator | 2026-02-19 06:39:49.444171 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-19 06:39:49.444178 | orchestrator | Thursday 19 February 2026 06:39:47 +0000 (0:00:01.242) 0:56:33.298 ***** 2026-02-19 06:39:49.444184 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:39:49.444190 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:39:49.444196 | orchestrator | 2026-02-19 06:39:49.444202 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-19 06:39:49.444208 | orchestrator | Thursday 19 February 2026 06:39:48 +0000 (0:00:01.190) 0:56:34.489 ***** 2026-02-19 06:39:49.444217 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:39:49.444226 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:39:49.444233 | orchestrator | 2026-02-19 06:39:49.444243 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-19 06:40:34.745269 | orchestrator | Thursday 19 February 2026 06:39:49 +0000 (0:00:01.166) 0:56:35.655 ***** 2026-02-19 06:40:34.745451 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.745473 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.745483 | orchestrator | 2026-02-19 06:40:34.745493 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-19 06:40:34.745596 | orchestrator | Thursday 19 February 2026 06:39:50 +0000 (0:00:01.542) 0:56:37.198 ***** 2026-02-19 06:40:34.745615 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.745630 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.745645 | orchestrator | 2026-02-19 06:40:34.745660 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-19 06:40:34.745676 | orchestrator | Thursday 19 February 2026 06:39:52 +0000 (0:00:01.223) 0:56:38.421 ***** 2026-02-19 06:40:34.745690 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.745706 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.745721 | orchestrator | 2026-02-19 06:40:34.745737 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-19 06:40:34.745752 | orchestrator | Thursday 19 February 2026 06:39:53 +0000 (0:00:01.240) 0:56:39.662 ***** 2026-02-19 06:40:34.745767 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:40:34.745784 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:40:34.745799 | orchestrator | 2026-02-19 06:40:34.745816 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-19 06:40:34.745832 | orchestrator | Thursday 19 February 2026 06:39:55 +0000 (0:00:02.156) 0:56:41.819 ***** 2026-02-19 06:40:34.745842 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:40:34.745853 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:40:34.745862 | orchestrator | 2026-02-19 06:40:34.745873 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-19 06:40:34.745883 | orchestrator | Thursday 19 February 2026 06:39:58 +0000 (0:00:02.419) 0:56:44.238 ***** 2026-02-19 06:40:34.745894 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4 2026-02-19 06:40:34.745904 | orchestrator | 2026-02-19 06:40:34.745913 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-19 06:40:34.745924 | orchestrator | Thursday 19 February 2026 06:39:59 +0000 (0:00:01.356) 0:56:45.595 ***** 2026-02-19 06:40:34.745933 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.745942 | orchestrator | skipping: [testbed-node-4] 
2026-02-19 06:40:34.745951 | orchestrator | 2026-02-19 06:40:34.745960 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-19 06:40:34.745968 | orchestrator | Thursday 19 February 2026 06:40:00 +0000 (0:00:01.271) 0:56:46.867 ***** 2026-02-19 06:40:34.745977 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.745986 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.745994 | orchestrator | 2026-02-19 06:40:34.746003 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-19 06:40:34.746061 | orchestrator | Thursday 19 February 2026 06:40:01 +0000 (0:00:01.216) 0:56:48.083 ***** 2026-02-19 06:40:34.746072 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-19 06:40:34.746080 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-19 06:40:34.746089 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-19 06:40:34.746098 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-19 06:40:34.746107 | orchestrator | 2026-02-19 06:40:34.746134 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-19 06:40:34.746149 | orchestrator | Thursday 19 February 2026 06:40:03 +0000 (0:00:01.934) 0:56:50.017 ***** 2026-02-19 06:40:34.746165 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:40:34.746181 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:40:34.746196 | orchestrator | 2026-02-19 06:40:34.746210 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-19 06:40:34.746254 | orchestrator | Thursday 19 February 2026 06:40:05 +0000 (0:00:01.613) 0:56:51.630 ***** 2026-02-19 06:40:34.746270 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.746282 | 
orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.746291 | orchestrator | 2026-02-19 06:40:34.746299 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-19 06:40:34.746308 | orchestrator | Thursday 19 February 2026 06:40:06 +0000 (0:00:01.236) 0:56:52.867 ***** 2026-02-19 06:40:34.746317 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.746325 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.746334 | orchestrator | 2026-02-19 06:40:34.746342 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-19 06:40:34.746351 | orchestrator | Thursday 19 February 2026 06:40:08 +0000 (0:00:01.594) 0:56:54.462 ***** 2026-02-19 06:40:34.746360 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.746368 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.746377 | orchestrator | 2026-02-19 06:40:34.746386 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-19 06:40:34.746394 | orchestrator | Thursday 19 February 2026 06:40:09 +0000 (0:00:01.214) 0:56:55.677 ***** 2026-02-19 06:40:34.746403 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4 2026-02-19 06:40:34.746412 | orchestrator | 2026-02-19 06:40:34.746420 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-19 06:40:34.746429 | orchestrator | Thursday 19 February 2026 06:40:10 +0000 (0:00:01.193) 0:56:56.871 ***** 2026-02-19 06:40:34.746438 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:40:34.746447 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:40:34.746456 | orchestrator | 2026-02-19 06:40:34.746464 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-19 06:40:34.746473 | orchestrator | Thursday 19 February 2026 
06:40:13 +0000 (0:00:02.674) 0:56:59.545 ***** 2026-02-19 06:40:34.746482 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 06:40:34.746544 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 06:40:34.746556 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 06:40:34.746564 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.746573 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 06:40:34.746582 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 06:40:34.746590 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 06:40:34.746599 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.746607 | orchestrator | 2026-02-19 06:40:34.746616 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-19 06:40:34.746625 | orchestrator | Thursday 19 February 2026 06:40:14 +0000 (0:00:01.266) 0:57:00.812 ***** 2026-02-19 06:40:34.746633 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.746642 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.746650 | orchestrator | 2026-02-19 06:40:34.746659 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-19 06:40:34.746668 | orchestrator | Thursday 19 February 2026 06:40:15 +0000 (0:00:01.309) 0:57:02.121 ***** 2026-02-19 06:40:34.746676 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.746685 | orchestrator | 2026-02-19 06:40:34.746693 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-19 06:40:34.746702 | orchestrator | Thursday 19 February 2026 06:40:17 +0000 (0:00:01.242) 0:57:03.364 ***** 2026-02-19 06:40:34.746710 | orchestrator | 
skipping: [testbed-node-3] 2026-02-19 06:40:34.746719 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.746727 | orchestrator | 2026-02-19 06:40:34.746736 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-19 06:40:34.746752 | orchestrator | Thursday 19 February 2026 06:40:18 +0000 (0:00:01.279) 0:57:04.643 ***** 2026-02-19 06:40:34.746761 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.746770 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.746779 | orchestrator | 2026-02-19 06:40:34.746788 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-19 06:40:34.746796 | orchestrator | Thursday 19 February 2026 06:40:19 +0000 (0:00:01.205) 0:57:05.849 ***** 2026-02-19 06:40:34.746805 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.746813 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.746822 | orchestrator | 2026-02-19 06:40:34.746830 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-19 06:40:34.746839 | orchestrator | Thursday 19 February 2026 06:40:20 +0000 (0:00:01.260) 0:57:07.110 ***** 2026-02-19 06:40:34.746847 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:40:34.746856 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:40:34.746864 | orchestrator | 2026-02-19 06:40:34.746873 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-19 06:40:34.746882 | orchestrator | Thursday 19 February 2026 06:40:23 +0000 (0:00:02.659) 0:57:09.769 ***** 2026-02-19 06:40:34.746890 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:40:34.746899 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:40:34.746924 | orchestrator | 2026-02-19 06:40:34.746955 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-19 06:40:34.746978 | orchestrator 
| Thursday 19 February 2026 06:40:24 +0000 (0:00:01.222) 0:57:10.992 ***** 2026-02-19 06:40:34.746993 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4 2026-02-19 06:40:34.747009 | orchestrator | 2026-02-19 06:40:34.747032 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-19 06:40:34.747046 | orchestrator | Thursday 19 February 2026 06:40:26 +0000 (0:00:01.404) 0:57:12.397 ***** 2026-02-19 06:40:34.747061 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.747076 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.747091 | orchestrator | 2026-02-19 06:40:34.747105 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-19 06:40:34.747121 | orchestrator | Thursday 19 February 2026 06:40:27 +0000 (0:00:01.231) 0:57:13.628 ***** 2026-02-19 06:40:34.747137 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.747154 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.747169 | orchestrator | 2026-02-19 06:40:34.747184 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-19 06:40:34.747193 | orchestrator | Thursday 19 February 2026 06:40:28 +0000 (0:00:01.220) 0:57:14.849 ***** 2026-02-19 06:40:34.747202 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.747210 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.747219 | orchestrator | 2026-02-19 06:40:34.747227 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-19 06:40:34.747236 | orchestrator | Thursday 19 February 2026 06:40:29 +0000 (0:00:01.256) 0:57:16.105 ***** 2026-02-19 06:40:34.747244 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.747253 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.747261 | orchestrator | 2026-02-19 
06:40:34.747270 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-19 06:40:34.747278 | orchestrator | Thursday 19 February 2026 06:40:31 +0000 (0:00:01.232) 0:57:17.337 ***** 2026-02-19 06:40:34.747287 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.747295 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.747304 | orchestrator | 2026-02-19 06:40:34.747313 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-19 06:40:34.747321 | orchestrator | Thursday 19 February 2026 06:40:32 +0000 (0:00:01.212) 0:57:18.550 ***** 2026-02-19 06:40:34.747330 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.747339 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.747356 | orchestrator | 2026-02-19 06:40:34.747368 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-19 06:40:34.747382 | orchestrator | Thursday 19 February 2026 06:40:33 +0000 (0:00:01.191) 0:57:19.741 ***** 2026-02-19 06:40:34.747396 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:40:34.747411 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:40:34.747424 | orchestrator | 2026-02-19 06:40:34.747448 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-19 06:41:15.539054 | orchestrator | Thursday 19 February 2026 06:40:34 +0000 (0:00:01.217) 0:57:20.959 ***** 2026-02-19 06:41:15.539156 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:41:15.539170 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:41:15.539179 | orchestrator | 2026-02-19 06:41:15.539188 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-19 06:41:15.539196 | orchestrator | Thursday 19 February 2026 06:40:35 +0000 (0:00:01.241) 0:57:22.200 ***** 2026-02-19 06:41:15.539205 | orchestrator | ok: 
[testbed-node-3] 2026-02-19 06:41:15.539213 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:41:15.539221 | orchestrator | 2026-02-19 06:41:15.539230 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-19 06:41:15.539238 | orchestrator | Thursday 19 February 2026 06:40:37 +0000 (0:00:01.251) 0:57:23.452 ***** 2026-02-19 06:41:15.539246 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4 2026-02-19 06:41:15.539254 | orchestrator | 2026-02-19 06:41:15.539262 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-19 06:41:15.539270 | orchestrator | Thursday 19 February 2026 06:40:38 +0000 (0:00:01.185) 0:57:24.637 ***** 2026-02-19 06:41:15.539278 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-19 06:41:15.539286 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-19 06:41:15.539294 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-19 06:41:15.539302 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-19 06:41:15.539310 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-19 06:41:15.539318 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-19 06:41:15.539326 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-19 06:41:15.539333 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-19 06:41:15.539341 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-19 06:41:15.539349 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-19 06:41:15.539357 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-19 06:41:15.539365 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-19 06:41:15.539372 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 
2026-02-19 06:41:15.539380 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-19 06:41:15.539388 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-19 06:41:15.539396 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-19 06:41:15.539404 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-19 06:41:15.539412 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-19 06:41:15.539420 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-19 06:41:15.539443 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-19 06:41:15.539451 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-19 06:41:15.539467 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-19 06:41:15.539475 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-19 06:41:15.539483 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-19 06:41:15.539568 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-19 06:41:15.539596 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-19 06:41:15.539604 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-19 06:41:15.539611 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-19 06:41:15.539618 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-19 06:41:15.539625 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-19 06:41:15.539632 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-19 06:41:15.539639 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-19 06:41:15.539646 | orchestrator | 2026-02-19 06:41:15.539654 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-19 06:41:15.539661 | orchestrator | Thursday 19 February 2026 06:40:45 +0000 (0:00:07.205) 0:57:31.843 ***** 2026-02-19 06:41:15.539668 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4 2026-02-19 06:41:15.539676 | orchestrator | 2026-02-19 06:41:15.539683 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-19 06:41:15.539690 | orchestrator | Thursday 19 February 2026 06:40:46 +0000 (0:00:01.290) 0:57:33.134 ***** 2026-02-19 06:41:15.539698 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-19 06:41:15.539707 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-19 06:41:15.539714 | orchestrator | 2026-02-19 06:41:15.539721 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-19 06:41:15.539729 | orchestrator | Thursday 19 February 2026 06:40:48 +0000 (0:00:01.625) 0:57:34.759 ***** 2026-02-19 06:41:15.539736 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-19 06:41:15.539743 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-19 06:41:15.539750 | orchestrator | 2026-02-19 06:41:15.539757 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-19 06:41:15.539779 | orchestrator | Thursday 19 February 2026 06:40:50 +0000 (0:00:02.045) 0:57:36.805 ***** 2026-02-19 06:41:15.539786 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:41:15.539794 | orchestrator | 
skipping: [testbed-node-4]
2026-02-19 06:41:15.539801 | orchestrator |
2026-02-19 06:41:15.539808 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 06:41:15.539815 | orchestrator | Thursday 19 February 2026 06:40:51 +0000 (0:00:01.248) 0:57:38.053 *****
2026-02-19 06:41:15.539822 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:41:15.539830 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:41:15.539837 | orchestrator |
2026-02-19 06:41:15.539844 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 06:41:15.539851 | orchestrator | Thursday 19 February 2026 06:40:53 +0000 (0:00:01.213) 0:57:39.267 *****
2026-02-19 06:41:15.539858 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:41:15.539865 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:41:15.539872 | orchestrator |
2026-02-19 06:41:15.539879 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 06:41:15.539886 | orchestrator | Thursday 19 February 2026 06:40:54 +0000 (0:00:01.511) 0:57:40.778 *****
2026-02-19 06:41:15.539893 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:41:15.539901 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:41:15.539908 | orchestrator |
2026-02-19 06:41:15.539915 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 06:41:15.539922 | orchestrator | Thursday 19 February 2026 06:40:55 +0000 (0:00:01.307) 0:57:42.086 *****
2026-02-19 06:41:15.539930 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:41:15.539937 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:41:15.539950 | orchestrator |
2026-02-19 06:41:15.539957 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 06:41:15.539964 | orchestrator | Thursday 19 February 2026 06:40:57 +0000 (0:00:01.241) 0:57:43.327 *****
2026-02-19 06:41:15.539971 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:41:15.539979 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:41:15.539986 | orchestrator |
2026-02-19 06:41:15.539993 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 06:41:15.540000 | orchestrator | Thursday 19 February 2026 06:40:58 +0000 (0:00:01.208) 0:57:44.535 *****
2026-02-19 06:41:15.540007 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:41:15.540014 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:41:15.540021 | orchestrator |
2026-02-19 06:41:15.540028 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 06:41:15.540035 | orchestrator | Thursday 19 February 2026 06:40:59 +0000 (0:00:01.223) 0:57:45.759 *****
2026-02-19 06:41:15.540042 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:41:15.540049 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:41:15.540056 | orchestrator |
2026-02-19 06:41:15.540064 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 06:41:15.540071 | orchestrator | Thursday 19 February 2026 06:41:00 +0000 (0:00:01.185) 0:57:46.945 *****
2026-02-19 06:41:15.540078 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:41:15.540085 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:41:15.540092 | orchestrator |
2026-02-19 06:41:15.540099 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 06:41:15.540106 | orchestrator | Thursday 19 February 2026 06:41:01 +0000 (0:00:01.226) 0:57:48.172 *****
2026-02-19 06:41:15.540113 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:41:15.540120 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:41:15.540127 | orchestrator |
2026-02-19 06:41:15.540138 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 06:41:15.540146 | orchestrator | Thursday 19 February 2026 06:41:03 +0000 (0:00:01.224) 0:57:49.396 *****
2026-02-19 06:41:15.540153 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:41:15.540160 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:41:15.540167 | orchestrator |
2026-02-19 06:41:15.540174 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 06:41:15.540181 | orchestrator | Thursday 19 February 2026 06:41:04 +0000 (0:00:01.218) 0:57:50.615 *****
2026-02-19 06:41:15.540188 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-19 06:41:15.540195 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-02-19 06:41:15.540202 | orchestrator |
2026-02-19 06:41:15.540209 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 06:41:15.540216 | orchestrator | Thursday 19 February 2026 06:41:09 +0000 (0:00:04.697) 0:57:55.313 *****
2026-02-19 06:41:15.540224 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-19 06:41:15.540231 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 06:41:15.540238 | orchestrator |
2026-02-19 06:41:15.540245 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-19 06:41:15.540252 | orchestrator | Thursday 19 February 2026 06:41:10 +0000 (0:00:01.267) 0:57:56.581 *****
2026-02-19 06:41:15.540261 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-19 06:41:15.540316 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-19 06:42:03.328964 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-19 06:42:03.329080 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-19 06:42:03.329091 | orchestrator |
2026-02-19 06:42:03.329099 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-19 06:42:03.329106 | orchestrator | Thursday 19 February 2026 06:41:15 +0000 (0:00:05.174) 0:58:01.755 *****
2026-02-19 06:42:03.329112 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:03.329119 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:42:03.329125 | orchestrator |
2026-02-19 06:42:03.329131 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-19 06:42:03.329137 | orchestrator | Thursday 19 February 2026 06:41:16 +0000 (0:00:01.201) 0:58:02.956 *****
2026-02-19 06:42:03.329143 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:03.329149 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:42:03.329155 | orchestrator |
2026-02-19 06:42:03.329161 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:42:03.329169 | orchestrator | Thursday 19 February 2026 06:41:18 +0000 (0:00:01.503) 0:58:04.460 *****
2026-02-19 06:42:03.329189 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:03.329195 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:42:03.329201 | orchestrator |
2026-02-19 06:42:03.329207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:42:03.329213 | orchestrator | Thursday 19 February 2026 06:41:19 +0000 (0:00:01.232) 0:58:05.692 *****
2026-02-19 06:42:03.329219 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:03.329225 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:42:03.329231 | orchestrator |
2026-02-19 06:42:03.329237 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:42:03.329242 | orchestrator | Thursday 19 February 2026 06:41:20 +0000 (0:00:01.250) 0:58:06.943 *****
2026-02-19 06:42:03.329248 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:03.329254 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:42:03.329260 | orchestrator |
2026-02-19 06:42:03.329266 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:42:03.329272 | orchestrator | Thursday 19 February 2026 06:41:21 +0000 (0:00:01.255) 0:58:08.199 *****
2026-02-19 06:42:03.329278 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:03.329285 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:42:03.329291 | orchestrator |
2026-02-19 06:42:03.329297 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:42:03.329316 | orchestrator | Thursday 19 February 2026 06:41:23 +0000 (0:00:01.316) 0:58:09.515 *****
2026-02-19 06:42:03.329322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:42:03.329329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 06:42:03.329335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 06:42:03.329341 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:03.329347 | orchestrator |
2026-02-19 06:42:03.329353 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 06:42:03.329376 | orchestrator | Thursday 19 February 2026 06:41:24 +0000 (0:00:01.354) 0:58:10.871 *****
2026-02-19 06:42:03.329382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:42:03.329388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 06:42:03.329394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 06:42:03.329400 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:03.329406 | orchestrator |
2026-02-19 06:42:03.329412 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 06:42:03.329417 | orchestrator | Thursday 19 February 2026 06:41:26 +0000 (0:00:01.550) 0:58:12.421 *****
2026-02-19 06:42:03.329423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:42:03.329429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 06:42:03.329435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 06:42:03.329441 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:03.329446 | orchestrator |
2026-02-19 06:42:03.329452 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 06:42:03.329458 | orchestrator | Thursday 19 February 2026 06:41:27 +0000 (0:00:01.592) 0:58:14.013 *****
2026-02-19 06:42:03.329464 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:03.329485 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:42:03.329491 | orchestrator |
2026-02-19 06:42:03.329497 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 06:42:03.329503 | orchestrator | Thursday 19 February 2026 06:41:29 +0000 (0:00:01.398) 0:58:15.412 *****
2026-02-19 06:42:03.329508 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-19 06:42:03.329514 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-19 06:42:03.329520 | orchestrator |
2026-02-19 06:42:03.329526 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-19 06:42:03.329532 | orchestrator | Thursday 19 February 2026 06:41:30 +0000 (0:00:01.461) 0:58:16.873 *****
2026-02-19 06:42:03.329538 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:03.329544 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:42:03.329549 | orchestrator |
2026-02-19 06:42:03.329570 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-19 06:42:03.329576 | orchestrator | Thursday 19 February 2026 06:41:32 +0000 (0:00:01.835) 0:58:18.709 *****
2026-02-19 06:42:03.329582 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:03.329588 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:42:03.329594 | orchestrator |
2026-02-19 06:42:03.329600 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-19 06:42:03.329606 | orchestrator | Thursday 19 February 2026 06:41:33 +0000 (0:00:01.244) 0:58:19.954 *****
2026-02-19 06:42:03.329612 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4
2026-02-19 06:42:03.329618 | orchestrator |
2026-02-19 06:42:03.329624 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-19 06:42:03.329630 | orchestrator | Thursday 19 February 2026 06:41:35 +0000 (0:00:01.441) 0:58:21.396 *****
2026-02-19 06:42:03.329636 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-19 06:42:03.329642 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-19 06:42:03.329647 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-19 06:42:03.329653 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-19 06:42:03.329659 | orchestrator |
2026-02-19 06:42:03.329665 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-19 06:42:03.329671 | orchestrator | Thursday 19 February 2026 06:41:37 +0000 (0:00:01.984) 0:58:23.381 *****
2026-02-19 06:42:03.329677 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 06:42:03.329683 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-19 06:42:03.329693 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-19 06:42:03.329699 | orchestrator |
2026-02-19 06:42:03.329705 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-19 06:42:03.329711 | orchestrator | Thursday 19 February 2026 06:41:40 +0000 (0:00:03.246) 0:58:26.627 *****
2026-02-19 06:42:03.329717 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-02-19 06:42:03.329723 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-19 06:42:03.329729 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:03.329735 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-02-19 06:42:03.329741 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-19 06:42:03.329746 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:42:03.329752 | orchestrator |
2026-02-19 06:42:03.329758 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-02-19 06:42:03.329764 | orchestrator | Thursday 19 February 2026 06:41:42 +0000 (0:00:02.102) 0:58:28.730 *****
2026-02-19 06:42:03.329770 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:03.329776 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:42:03.329781 | orchestrator |
2026-02-19 06:42:03.329787 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-02-19 06:42:03.329793 | orchestrator | Thursday 19 February 2026 06:41:44 +0000 (0:00:01.582) 0:58:30.312 *****
2026-02-19 06:42:03.329799 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:03.329805 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:42:03.329810 | orchestrator |
2026-02-19 06:42:03.329816 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-02-19 06:42:03.329825 | orchestrator | Thursday 19 February 2026 06:41:45 +0000 (0:00:01.213) 0:58:31.526 *****
2026-02-19 06:42:03.329831 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4
2026-02-19 06:42:03.329838 | orchestrator |
2026-02-19 06:42:03.329843 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-02-19 06:42:03.329849 | orchestrator | Thursday 19 February 2026 06:41:46 +0000 (0:00:01.421) 0:58:32.947 *****
2026-02-19 06:42:03.329855 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4
2026-02-19 06:42:03.329861 | orchestrator |
2026-02-19 06:42:03.329867 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-02-19 06:42:03.329873 | orchestrator | Thursday 19 February 2026 06:41:47 +0000 (0:00:01.190) 0:58:34.138 *****
2026-02-19 06:42:03.329879 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:03.329884 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:42:03.329890 | orchestrator |
2026-02-19 06:42:03.329896 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-02-19 06:42:03.329902 | orchestrator | Thursday 19 February 2026 06:41:50 +0000 (0:00:02.106) 0:58:36.245 *****
2026-02-19 06:42:03.329908 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:03.329914 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:42:03.329919 | orchestrator |
2026-02-19 06:42:03.329925 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-02-19 06:42:03.329931 | orchestrator | Thursday 19 February 2026 06:41:52 +0000 (0:00:02.084) 0:58:38.330 *****
2026-02-19 06:42:03.329937 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:03.329943 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:42:03.329948 | orchestrator |
2026-02-19 06:42:03.329954 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-02-19 06:42:03.329960 | orchestrator | Thursday 19 February 2026 06:41:54 +0000 (0:00:02.426) 0:58:40.757 *****
2026-02-19 06:42:03.329966 | orchestrator | changed: [testbed-node-3]
2026-02-19 06:42:03.329972 | orchestrator | changed: [testbed-node-4]
2026-02-19 06:42:03.329978 | orchestrator |
2026-02-19 06:42:03.329983 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-02-19 06:42:03.329989 | orchestrator | Thursday 19 February 2026 06:41:58 +0000 (0:00:03.512) 0:58:44.269 *****
2026-02-19 06:42:03.330001 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:03.330007 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:42:03.330013 | orchestrator |
2026-02-19 06:42:03.330063 | orchestrator | TASK [Set max_mds] *************************************************************
2026-02-19 06:42:03.330070 | orchestrator | Thursday 19 February 2026 06:41:59 +0000 (0:00:01.760) 0:58:46.030 *****
2026-02-19 06:42:03.330075 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:03.330086 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-19 06:42:25.848713 | orchestrator |
2026-02-19 06:42:25.848852 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-02-19 06:42:25.848875 | orchestrator |
2026-02-19 06:42:25.848892 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-19 06:42:25.848907 | orchestrator | Thursday 19 February 2026 06:42:03 +0000 (0:00:03.506) 0:58:49.536 *****
2026-02-19 06:42:25.848921 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-02-19 06:42:25.848936 | orchestrator |
2026-02-19 06:42:25.848950 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-19 06:42:25.848965 | orchestrator | Thursday 19 February 2026 06:42:04 +0000 (0:00:01.102) 0:58:50.639 *****
2026-02-19 06:42:25.848980 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:25.848995 | orchestrator |
2026-02-19 06:42:25.849009 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-19 06:42:25.849024 | orchestrator | Thursday 19 February 2026 06:42:05 +0000 (0:00:01.460) 0:58:52.099 *****
2026-02-19 06:42:25.849039 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:25.849053 | orchestrator |
2026-02-19 06:42:25.849068 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-19 06:42:25.849084 | orchestrator | Thursday 19 February 2026 06:42:07 +0000 (0:00:01.146) 0:58:53.246 *****
2026-02-19 06:42:25.849099 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:25.849114 | orchestrator |
2026-02-19 06:42:25.849129 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-19 06:42:25.849144 | orchestrator | Thursday 19 February 2026 06:42:08 +0000 (0:00:01.438) 0:58:54.684 *****
2026-02-19 06:42:25.849160 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:25.849175 | orchestrator |
2026-02-19 06:42:25.849190 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-19 06:42:25.849205 | orchestrator | Thursday 19 February 2026 06:42:09 +0000 (0:00:01.142) 0:58:55.827 *****
2026-02-19 06:42:25.849220 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:25.849236 | orchestrator |
2026-02-19 06:42:25.849253 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-19 06:42:25.849268 | orchestrator | Thursday 19 February 2026 06:42:10 +0000 (0:00:01.142) 0:58:56.969 *****
2026-02-19 06:42:25.849284 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:25.849299 | orchestrator |
2026-02-19 06:42:25.849312 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-19 06:42:25.849327 | orchestrator | Thursday 19 February 2026 06:42:11 +0000 (0:00:01.128) 0:58:58.097 *****
2026-02-19 06:42:25.849340 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:25.849355 | orchestrator |
2026-02-19 06:42:25.849371 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-19 06:42:25.849386 | orchestrator | Thursday 19 February 2026 06:42:13 +0000 (0:00:01.166) 0:58:59.264 *****
2026-02-19 06:42:25.849398 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:25.849407 | orchestrator |
2026-02-19 06:42:25.849416 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-19 06:42:25.849425 | orchestrator | Thursday 19 February 2026 06:42:14 +0000 (0:00:01.099) 0:59:00.364 *****
2026-02-19 06:42:25.849433 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:42:25.849442 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:42:25.849497 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:42:25.849542 | orchestrator |
2026-02-19 06:42:25.849551 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-19 06:42:25.849560 | orchestrator | Thursday 19 February 2026 06:42:15 +0000 (0:00:01.676) 0:59:02.040 *****
2026-02-19 06:42:25.849568 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:25.849577 | orchestrator |
2026-02-19 06:42:25.849586 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-19 06:42:25.849594 | orchestrator | Thursday 19 February 2026 06:42:17 +0000 (0:00:01.233) 0:59:03.274 *****
2026-02-19 06:42:25.849603 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:42:25.849611 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:42:25.849619 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:42:25.849629 | orchestrator |
2026-02-19 06:42:25.849638 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-19 06:42:25.849646 | orchestrator | Thursday 19 February 2026 06:42:19 +0000 (0:00:02.919) 0:59:06.193 *****
2026-02-19 06:42:25.849655 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-19 06:42:25.849664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-19 06:42:25.849672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-19 06:42:25.849681 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:25.849690 | orchestrator |
2026-02-19 06:42:25.849698 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-19 06:42:25.849707 | orchestrator | Thursday 19 February 2026 06:42:21 +0000 (0:00:01.435) 0:59:07.629 *****
2026-02-19 06:42:25.849718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-19 06:42:25.849730 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 06:42:25.849760 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 06:42:25.849770 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:25.849778 | orchestrator |
2026-02-19 06:42:25.849787 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-19 06:42:25.849796 | orchestrator | Thursday 19 February 2026 06:42:23 +0000 (0:00:01.952) 0:59:09.582 *****
2026-02-19 06:42:25.849807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 06:42:25.849817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 06:42:25.849827 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 06:42:25.849843 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:25.849852 | orchestrator |
2026-02-19 06:42:25.849861 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-19 06:42:25.849869 | orchestrator | Thursday 19 February 2026 06:42:24 +0000 (0:00:01.184) 0:59:10.766 *****
2026-02-19 06:42:25.849886 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 06:42:17.655415', 'end': '2026-02-19 06:42:17.707007', 'delta': '0:00:00.051592', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-19 06:42:25.849899 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 06:42:18.228813', 'end': '2026-02-19 06:42:18.266724', 'delta': '0:00:00.037911', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 06:42:25.849908 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8bdbabe346bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 06:42:18.796542', 'end': '2026-02-19 06:42:18.843371', 'delta': '0:00:00.046829', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8bdbabe346bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 06:42:25.849917 | orchestrator |
2026-02-19 06:42:25.849932 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-19 06:42:43.528798 | orchestrator | Thursday 19 February 2026 06:42:25 +0000 (0:00:01.293) 0:59:12.059 *****
2026-02-19 06:42:43.528911 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:43.528928 | orchestrator |
2026-02-19 06:42:43.528942 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-19 06:42:43.528953 | orchestrator | Thursday 19 February 2026 06:42:27 +0000 (0:00:01.675) 0:59:13.323 *****
2026-02-19 06:42:43.528965 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:43.528977 | orchestrator |
2026-02-19 06:42:43.528988 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-19 06:42:43.529000 | orchestrator | Thursday 19 February 2026 06:42:28 +0000 (0:00:01.675) 0:59:14.998 *****
2026-02-19 06:42:43.529010 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:43.529021 | orchestrator |
2026-02-19 06:42:43.529032 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-19 06:42:43.529043 | orchestrator | Thursday 19 February 2026 06:42:29 +0000 (0:00:01.133) 0:59:16.131 *****
2026-02-19 06:42:43.529054 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-19 06:42:43.529091 | orchestrator |
2026-02-19 06:42:43.529102 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 06:42:43.529113 | orchestrator | Thursday 19 February 2026 06:42:31 +0000 (0:00:02.026) 0:59:18.158 *****
2026-02-19 06:42:43.529124 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:43.529135 | orchestrator |
2026-02-19 06:42:43.529145 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-19 06:42:43.529156 | orchestrator | Thursday 19 February 2026 06:42:33 +0000 (0:00:01.120) 0:59:19.278 *****
2026-02-19 06:42:43.529167 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:43.529178 | orchestrator |
2026-02-19 06:42:43.529189 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-19 06:42:43.529200 | orchestrator | Thursday 19 February 2026 06:42:34 +0000 (0:00:01.105) 0:59:20.383 *****
2026-02-19 06:42:43.529211 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:43.529221 | orchestrator |
2026-02-19 06:42:43.529232 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 06:42:43.529243 | orchestrator | Thursday 19 February 2026 06:42:35 +0000 (0:00:01.237) 0:59:21.621 *****
2026-02-19 06:42:43.529253 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:43.529264 | orchestrator |
2026-02-19 06:42:43.529274 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-19 06:42:43.529285 | orchestrator | Thursday 19 February 2026 06:42:36 +0000 (0:00:01.158) 0:59:22.779 *****
2026-02-19 06:42:43.529296 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:43.529306 | orchestrator |
2026-02-19 06:42:43.529316 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-19 06:42:43.529327 | orchestrator | Thursday 19 February 2026 06:42:37 +0000 (0:00:01.109) 0:59:23.889 *****
2026-02-19 06:42:43.529337 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:43.529348 | orchestrator |
2026-02-19 06:42:43.529359 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-19 06:42:43.529369 | orchestrator | Thursday 19 February 2026 06:42:38 +0000 (0:00:01.150) 0:59:25.040 *****
2026-02-19 06:42:43.529380 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:43.529390 | orchestrator |
2026-02-19 06:42:43.529414 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-19 06:42:43.529425 | orchestrator | Thursday 19 February 2026 06:42:39 +0000 (0:00:01.091) 0:59:26.132 *****
2026-02-19 06:42:43.529435 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:43.529446 | orchestrator |
2026-02-19 06:42:43.529484 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-19 06:42:43.529496 | orchestrator | Thursday 19 February 2026 06:42:41 +0000 (0:00:01.162) 0:59:27.294 *****
2026-02-19 06:42:43.529507 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:42:43.529518 | orchestrator |
2026-02-19 06:42:43.529529 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-19 06:42:43.529540 | orchestrator | Thursday 19 February 2026 06:42:42 +0000 (0:00:01.100) 0:59:28.394 *****
2026-02-19 06:42:43.529551 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:42:43.529561 | orchestrator |
2026-02-19 06:42:43.529572 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-19 06:42:43.529583 | orchestrator | Thursday 19 February 2026 06:42:43 +0000 (0:00:01.136) 0:59:29.531 *****
2026-02-19 06:42:43.529596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-19 06:42:43.529612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542', 'dm-uuid-LVM-lX34uhB8tmDTkL93DczNXv6QbAw0ysjKmdjNAgdMohU9ZcAXcHNfClcWYQxdmajV'], 'uuids': ['76bd5aba-0bb7-430d-953d-ee2f2591c83e'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV']}})
2026-02-19 06:42:43.529653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417', 'scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '50533a39', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-19 06:42:43.529667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-he7JRo-1c5L-pX5O-Be3A-VFvn-vFA2-R1K8r6', 'scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e', 'scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159']}})
2026-02-19 06:42:43.529680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-19 06:42:43.529698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-19 06:42:43.529710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-19 06:42:43.529722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0',
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:42:43.529734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl', 'dm-uuid-CRYPT-LUKS2-96c3bdbb8dfb4f8d89601607ffc96021-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:42:43.529760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:42:44.874968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159', 'dm-uuid-LVM-woOiLPc2MZX9tMqNu9mJ52M00GUnNLJGpmysyPKim6lEMTRsO9IDguylIzFZfnRl'], 'uuids': ['96c3bdbb-8dfb-4f8d-8960-1607ffc96021'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl']}})  2026-02-19 06:42:44.875059 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qeKANd-btTr-kyqx-ZYbg-qz1F-HqnA-ll4bBH', 'scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8', 'scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542']}})  2026-02-19 06:42:44.875072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:42:44.875102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23a82e55', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 06:42:44.875146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:42:44.875157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:42:44.875166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV', 'dm-uuid-CRYPT-LUKS2-76bd5aba0bb7430d953dee2f2591c83e-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:42:44.875176 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:42:44.875186 | orchestrator | 2026-02-19 06:42:44.875195 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 06:42:44.875204 | orchestrator | Thursday 19 February 2026 06:42:44 +0000 (0:00:01.334) 0:59:30.865 ***** 2026-02-19 06:42:44.875218 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:44.875228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542', 'dm-uuid-LVM-lX34uhB8tmDTkL93DczNXv6QbAw0ysjKmdjNAgdMohU9ZcAXcHNfClcWYQxdmajV'], 'uuids': ['76bd5aba-0bb7-430d-953d-ee2f2591c83e'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:44.875243 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417', 'scsi-SQEMU_QEMU_HARDDISK_50533a39-fac2-4c6c-8c30-88a176048417'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '50533a39', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:44.875258 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-he7JRo-1c5L-pX5O-Be3A-VFvn-vFA2-R1K8r6', 'scsi-0QEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e', 'scsi-SQEMU_QEMU_HARDDISK_c337844b-d29f-48f9-b97b-1b04477f979e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:46.082740 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:46.082929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:46.082953 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:46.083035 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:46.083050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl', 'dm-uuid-CRYPT-LUKS2-96c3bdbb8dfb4f8d89601607ffc96021-pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:46.083070 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:46.083116 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc132c82--2da4--526a--8d14--ac4e81fe1159-osd--block--dc132c82--2da4--526a--8d14--ac4e81fe1159', 'dm-uuid-LVM-woOiLPc2MZX9tMqNu9mJ52M00GUnNLJGpmysyPKim6lEMTRsO9IDguylIzFZfnRl'], 'uuids': ['96c3bdbb-8dfb-4f8d-8960-1607ffc96021'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c337844b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pmysyP-Kim6-lEMT-RsO9-IDgu-ylIz-FZfnRl']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:46.083145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qeKANd-btTr-kyqx-ZYbg-qz1F-HqnA-ll4bBH', 'scsi-0QEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8', 'scsi-SQEMU_QEMU_HARDDISK_c1412cfc-917e-4010-87bd-d14c29c1eff8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c1412cfc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--900578fb--6201--5328--bc2d--5e3d92afe542-osd--block--900578fb--6201--5328--bc2d--5e3d92afe542']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:46.083184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:42:46.083219 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23a82e55', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1', 'scsi-SQEMU_QEMU_HARDDISK_23a82e55-09a4-48a2-8455-a56aa9578cd9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:43:13.260910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:43:13.261151 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:43:13.261219 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV', 'dm-uuid-CRYPT-LUKS2-76bd5aba0bb7430d953dee2f2591c83e-mdjNAg-dMoh-U9Zc-AXcH-NfCl-cWYQ-xdmajV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:43:13.261245 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:43:13.261267 | orchestrator | 2026-02-19 06:43:13.261288 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-19 06:43:13.261307 | orchestrator | Thursday 19 February 2026 06:42:46 +0000 (0:00:01.437) 0:59:32.303 ***** 2026-02-19 06:43:13.261326 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:43:13.261345 | orchestrator | 2026-02-19 06:43:13.261362 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-19 06:43:13.261381 | orchestrator | Thursday 19 February 2026 06:42:47 +0000 (0:00:01.434) 0:59:33.737 ***** 2026-02-19 06:43:13.261398 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:43:13.261416 | orchestrator | 2026-02-19 06:43:13.261435 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:43:13.261485 | orchestrator | Thursday 19 February 2026 06:42:48 +0000 (0:00:01.135) 0:59:34.873 ***** 2026-02-19 06:43:13.261503 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:43:13.261522 | orchestrator | 2026-02-19 06:43:13.261541 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:43:13.261560 | orchestrator | Thursday 19 February 2026 06:42:50 +0000 (0:00:01.455) 0:59:36.329 ***** 2026-02-19 06:43:13.261578 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:43:13.261596 | orchestrator | 2026-02-19 06:43:13.261614 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-19 06:43:13.261632 | orchestrator | Thursday 19 February 2026 06:42:51 +0000 (0:00:01.137) 0:59:37.467 ***** 2026-02-19 06:43:13.261650 | orchestrator | skipping: [testbed-node-3] 2026-02-19 
06:43:13.261667 | orchestrator | 2026-02-19 06:43:13.261686 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-19 06:43:13.261704 | orchestrator | Thursday 19 February 2026 06:42:52 +0000 (0:00:01.231) 0:59:38.698 ***** 2026-02-19 06:43:13.261722 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:43:13.261741 | orchestrator | 2026-02-19 06:43:13.261760 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-19 06:43:13.261779 | orchestrator | Thursday 19 February 2026 06:42:53 +0000 (0:00:01.142) 0:59:39.840 ***** 2026-02-19 06:43:13.261797 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-19 06:43:13.261817 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-19 06:43:13.261829 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-19 06:43:13.261839 | orchestrator | 2026-02-19 06:43:13.261848 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-19 06:43:13.261858 | orchestrator | Thursday 19 February 2026 06:42:55 +0000 (0:00:01.917) 0:59:41.758 ***** 2026-02-19 06:43:13.261868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-19 06:43:13.261878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-19 06:43:13.261888 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-19 06:43:13.261909 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:43:13.261919 | orchestrator | 2026-02-19 06:43:13.261928 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-19 06:43:13.261938 | orchestrator | Thursday 19 February 2026 06:42:56 +0000 (0:00:01.194) 0:59:42.953 ***** 2026-02-19 06:43:13.261968 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-19 06:43:13.261979 | 
2026-02-19 06:43:13.261989 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:43:13.262008 | orchestrator | Thursday 19 February 2026 06:42:57 +0000 (0:00:01.126) 0:59:44.079 *****
2026-02-19 06:43:13.262104 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:43:13.262125 | orchestrator |
2026-02-19 06:43:13.262141 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:43:13.262157 | orchestrator | Thursday 19 February 2026 06:42:59 +0000 (0:00:01.153) 0:59:45.233 *****
2026-02-19 06:43:13.262174 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:43:13.262190 | orchestrator |
2026-02-19 06:43:13.262218 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:43:13.262229 | orchestrator | Thursday 19 February 2026 06:43:00 +0000 (0:00:01.135) 0:59:46.368 *****
2026-02-19 06:43:13.262239 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:43:13.262248 | orchestrator |
2026-02-19 06:43:13.262258 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:43:13.262267 | orchestrator | Thursday 19 February 2026 06:43:01 +0000 (0:00:01.010) 0:59:47.379 *****
2026-02-19 06:43:13.262277 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:43:13.262286 | orchestrator |
2026-02-19 06:43:13.262296 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:43:13.262305 | orchestrator | Thursday 19 February 2026 06:43:02 +0000 (0:00:01.168) 0:59:48.547 *****
2026-02-19 06:43:13.262315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:43:13.262324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 06:43:13.262334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 06:43:13.262343 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:43:13.262353 | orchestrator |
2026-02-19 06:43:13.262362 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 06:43:13.262372 | orchestrator | Thursday 19 February 2026 06:43:03 +0000 (0:00:01.201) 0:59:49.749 *****
2026-02-19 06:43:13.262381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:43:13.262391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 06:43:13.262400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 06:43:13.262410 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:43:13.262419 | orchestrator |
2026-02-19 06:43:13.262429 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 06:43:13.262438 | orchestrator | Thursday 19 February 2026 06:43:04 +0000 (0:00:01.312) 0:59:51.062 *****
2026-02-19 06:43:13.262467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:43:13.262477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-19 06:43:13.262486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-19 06:43:13.262495 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:43:13.262505 | orchestrator |
2026-02-19 06:43:13.262515 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 06:43:13.262524 | orchestrator | Thursday 19 February 2026 06:43:06 +0000 (0:00:01.352) 0:59:52.414 *****
2026-02-19 06:43:13.262534 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:43:13.262543 | orchestrator |
2026-02-19 06:43:13.262553 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 06:43:13.262562 | orchestrator | Thursday 19 February 2026 06:43:07 +0000 (0:00:01.129) 0:59:53.544 *****
2026-02-19 06:43:13.262586 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-19 06:43:13.262596 | orchestrator |
2026-02-19 06:43:13.262606 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-19 06:43:13.262625 | orchestrator | Thursday 19 February 2026 06:43:08 +0000 (0:00:01.269) 0:59:54.813 *****
2026-02-19 06:43:13.262635 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:43:13.262645 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:43:13.262654 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:43:13.262664 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:43:13.262673 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 06:43:13.262683 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-19 06:43:13.262693 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 06:43:13.262703 | orchestrator |
2026-02-19 06:43:13.262712 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-19 06:43:13.262722 | orchestrator | Thursday 19 February 2026 06:43:10 +0000 (0:00:02.137) 0:59:56.951 *****
2026-02-19 06:43:13.262731 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:43:13.262741 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:43:13.262750 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:43:13.262760 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-19 06:43:13.262770 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 06:43:13.262779 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-19 06:43:13.262788 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 06:43:13.262798 | orchestrator |
2026-02-19 06:43:13.262819 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-19 06:44:05.642082 | orchestrator | Thursday 19 February 2026 06:43:13 +0000 (0:00:02.516) 0:59:59.468 *****
2026-02-19 06:44:05.642325 | orchestrator | changed: [testbed-node-3]
2026-02-19 06:44:05.642360 | orchestrator |
2026-02-19 06:44:05.642382 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-19 06:44:05.642397 | orchestrator | Thursday 19 February 2026 06:43:15 +0000 (0:00:02.359) 1:00:01.827 *****
2026-02-19 06:44:05.642411 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-19 06:44:05.642425 | orchestrator |
2026-02-19 06:44:05.642470 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-19 06:44:05.642518 | orchestrator | Thursday 19 February 2026 06:43:18 +0000 (0:00:03.002) 1:00:04.830 *****
2026-02-19 06:44:05.642533 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-19 06:44:05.642546 | orchestrator |
2026-02-19 06:44:05.642560 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 06:44:05.642573 | orchestrator | Thursday 19 February 2026 06:43:20 +0000 (0:00:02.364) 1:00:07.194 *****
2026-02-19 06:44:05.642593 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-02-19 06:44:05.642621 | orchestrator |
2026-02-19 06:44:05.642642 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-19 06:44:05.642659 | orchestrator | Thursday 19 February 2026 06:43:22 +0000 (0:00:01.111) 1:00:08.306 *****
2026-02-19 06:44:05.642677 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-02-19 06:44:05.642782 | orchestrator |
2026-02-19 06:44:05.642806 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-19 06:44:05.642823 | orchestrator | Thursday 19 February 2026 06:43:23 +0000 (0:00:01.133) 1:00:09.440 *****
2026-02-19 06:44:05.642840 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.642859 | orchestrator |
2026-02-19 06:44:05.642877 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-19 06:44:05.642897 | orchestrator | Thursday 19 February 2026 06:43:24 +0000 (0:00:01.111) 1:00:10.551 *****
2026-02-19 06:44:05.642917 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:05.642936 | orchestrator |
2026-02-19 06:44:05.642956 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-19 06:44:05.642972 | orchestrator | Thursday 19 February 2026 06:43:25 +0000 (0:00:01.568) 1:00:12.119 *****
2026-02-19 06:44:05.642983 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:05.642994 | orchestrator |
2026-02-19 06:44:05.643005 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-19 06:44:05.643016 | orchestrator | Thursday 19 February 2026 06:43:27 +0000 (0:00:01.520) 1:00:13.640 *****
2026-02-19 06:44:05.643027 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:05.643038 | orchestrator |
2026-02-19 06:44:05.643048 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-19 06:44:05.643059 | orchestrator | Thursday 19 February 2026 06:43:28 +0000 (0:00:01.518) 1:00:15.159 *****
2026-02-19 06:44:05.643070 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.643081 | orchestrator |
2026-02-19 06:44:05.643092 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-19 06:44:05.643102 | orchestrator | Thursday 19 February 2026 06:43:30 +0000 (0:00:01.111) 1:00:16.270 *****
2026-02-19 06:44:05.643113 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.643124 | orchestrator |
2026-02-19 06:44:05.643134 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-19 06:44:05.643145 | orchestrator | Thursday 19 February 2026 06:43:31 +0000 (0:00:01.130) 1:00:17.401 *****
2026-02-19 06:44:05.643156 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.643167 | orchestrator |
2026-02-19 06:44:05.643177 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-19 06:44:05.643188 | orchestrator | Thursday 19 February 2026 06:43:32 +0000 (0:00:01.092) 1:00:18.493 *****
2026-02-19 06:44:05.643199 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:05.643210 | orchestrator |
2026-02-19 06:44:05.643221 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-19 06:44:05.643232 | orchestrator | Thursday 19 February 2026 06:43:33 +0000 (0:00:01.517) 1:00:20.011 *****
2026-02-19 06:44:05.643242 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:05.643253 | orchestrator |
2026-02-19 06:44:05.643264 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-19 06:44:05.643275 | orchestrator | Thursday 19 February 2026 06:43:35 +0000 (0:00:01.619) 1:00:21.631 *****
2026-02-19 06:44:05.643285 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.643296 | orchestrator |
2026-02-19 06:44:05.643307 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 06:44:05.643318 | orchestrator | Thursday 19 February 2026 06:43:36 +0000 (0:00:01.104) 1:00:22.735 *****
2026-02-19 06:44:05.643329 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.643339 | orchestrator |
2026-02-19 06:44:05.643351 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 06:44:05.643361 | orchestrator | Thursday 19 February 2026 06:43:37 +0000 (0:00:01.114) 1:00:23.850 *****
2026-02-19 06:44:05.643372 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:05.643383 | orchestrator |
2026-02-19 06:44:05.643394 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 06:44:05.643404 | orchestrator | Thursday 19 February 2026 06:43:38 +0000 (0:00:01.144) 1:00:24.995 *****
2026-02-19 06:44:05.643415 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:05.643513 | orchestrator |
2026-02-19 06:44:05.643527 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 06:44:05.643538 | orchestrator | Thursday 19 February 2026 06:43:39 +0000 (0:00:01.141) 1:00:26.136 *****
2026-02-19 06:44:05.643549 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:05.643560 | orchestrator |
2026-02-19 06:44:05.643596 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 06:44:05.643607 | orchestrator | Thursday 19 February 2026 06:43:41 +0000 (0:00:01.149) 1:00:27.286 *****
2026-02-19 06:44:05.643618 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.643629 | orchestrator |
2026-02-19 06:44:05.643640 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 06:44:05.643651 | orchestrator | Thursday 19 February 2026 06:43:42 +0000 (0:00:01.137) 1:00:28.424 *****
2026-02-19 06:44:05.643662 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.643673 | orchestrator |
2026-02-19 06:44:05.643684 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 06:44:05.643695 | orchestrator | Thursday 19 February 2026 06:43:43 +0000 (0:00:01.157) 1:00:29.581 *****
2026-02-19 06:44:05.643705 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.643716 | orchestrator |
2026-02-19 06:44:05.643735 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 06:44:05.643747 | orchestrator | Thursday 19 February 2026 06:43:44 +0000 (0:00:01.092) 1:00:30.674 *****
2026-02-19 06:44:05.643758 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:05.643769 | orchestrator |
2026-02-19 06:44:05.643780 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 06:44:05.643791 | orchestrator | Thursday 19 February 2026 06:43:45 +0000 (0:00:01.126) 1:00:31.800 *****
2026-02-19 06:44:05.643802 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:05.643813 | orchestrator |
2026-02-19 06:44:05.643824 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-19 06:44:05.643834 | orchestrator | Thursday 19 February 2026 06:43:46 +0000 (0:00:01.128) 1:00:32.929 *****
2026-02-19 06:44:05.643845 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.643856 | orchestrator |
2026-02-19 06:44:05.643867 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-19 06:44:05.643878 | orchestrator | Thursday 19 February 2026 06:43:47 +0000 (0:00:01.112) 1:00:34.042 *****
2026-02-19 06:44:05.643888 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.643899 | orchestrator |
2026-02-19 06:44:05.643910 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-19 06:44:05.643919 | orchestrator | Thursday 19 February 2026 06:43:48 +0000 (0:00:01.148) 1:00:35.190 *****
2026-02-19 06:44:05.643929 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.643939 | orchestrator |
2026-02-19 06:44:05.643948 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-19 06:44:05.643958 | orchestrator | Thursday 19 February 2026 06:43:50 +0000 (0:00:01.127) 1:00:36.318 *****
2026-02-19 06:44:05.643967 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.643977 | orchestrator |
2026-02-19 06:44:05.643987 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-19 06:44:05.643996 | orchestrator | Thursday 19 February 2026 06:43:51 +0000 (0:00:01.133) 1:00:37.452 *****
2026-02-19 06:44:05.644006 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.644016 | orchestrator |
2026-02-19 06:44:05.644025 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-19 06:44:05.644035 | orchestrator | Thursday 19 February 2026 06:43:52 +0000 (0:00:01.142) 1:00:38.594 *****
2026-02-19 06:44:05.644045 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.644054 | orchestrator |
2026-02-19 06:44:05.644064 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-19 06:44:05.644073 | orchestrator | Thursday 19 February 2026 06:43:53 +0000 (0:00:01.151) 1:00:39.745 *****
2026-02-19 06:44:05.644083 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.644117 | orchestrator |
2026-02-19 06:44:05.644128 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-19 06:44:05.644139 | orchestrator | Thursday 19 February 2026 06:43:54 +0000 (0:00:01.109) 1:00:40.855 *****
2026-02-19 06:44:05.644148 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.644158 | orchestrator |
2026-02-19 06:44:05.644168 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-19 06:44:05.644178 | orchestrator | Thursday 19 February 2026 06:43:55 +0000 (0:00:01.149) 1:00:42.004 *****
2026-02-19 06:44:05.644187 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.644197 | orchestrator |
2026-02-19 06:44:05.644207 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-19 06:44:05.644216 | orchestrator | Thursday 19 February 2026 06:43:56 +0000 (0:00:01.137) 1:00:43.142 *****
2026-02-19 06:44:05.644226 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.644236 | orchestrator |
2026-02-19 06:44:05.644245 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-19 06:44:05.644255 | orchestrator | Thursday 19 February 2026 06:43:58 +0000 (0:00:01.112) 1:00:44.255 *****
2026-02-19 06:44:05.644265 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.644274 | orchestrator |
2026-02-19 06:44:05.644284 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-19 06:44:05.644293 | orchestrator | Thursday 19 February 2026 06:43:59 +0000 (0:00:01.098) 1:00:45.354 *****
2026-02-19 06:44:05.644303 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:05.644313 | orchestrator |
2026-02-19 06:44:05.644322 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-19 06:44:05.644332 | orchestrator | Thursday 19 February 2026 06:44:00 +0000 (0:00:01.087) 1:00:46.441 *****
2026-02-19 06:44:05.644342 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:05.644351 | orchestrator |
2026-02-19 06:44:05.644361 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-19 06:44:05.644371 | orchestrator | Thursday 19 February 2026 06:44:02 +0000 (0:00:02.251) 1:00:48.341 *****
2026-02-19 06:44:05.644380 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:05.644390 | orchestrator |
2026-02-19 06:44:05.644400 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-19 06:44:05.644409 | orchestrator | Thursday 19 February 2026 06:44:04 +0000 (0:00:02.251) 1:00:50.593 *****
2026-02-19 06:44:05.644419 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-19 06:44:05.644443 | orchestrator |
2026-02-19 06:44:05.644453 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-19 06:44:05.644469 | orchestrator | Thursday 19 February 2026 06:44:05 +0000 (0:00:01.261) 1:00:51.854 *****
2026-02-19 06:44:51.071066 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071155 | orchestrator |
2026-02-19 06:44:51.071164 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-19 06:44:51.071171 | orchestrator | Thursday 19 February 2026 06:44:06 +0000 (0:00:01.116) 1:00:52.971 *****
2026-02-19 06:44:51.071176 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071182 | orchestrator |
2026-02-19 06:44:51.071187 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-19 06:44:51.071192 | orchestrator | Thursday 19 February 2026 06:44:07 +0000 (0:00:01.100) 1:00:54.072 *****
2026-02-19 06:44:51.071198 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 06:44:51.071215 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 06:44:51.071221 | orchestrator |
2026-02-19 06:44:51.071226 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-19 06:44:51.071231 | orchestrator | Thursday 19 February 2026 06:44:09 +0000 (0:00:01.775) 1:00:55.847 *****
2026-02-19 06:44:51.071235 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:51.071242 | orchestrator |
2026-02-19 06:44:51.071265 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-19 06:44:51.071270 | orchestrator | Thursday 19 February 2026 06:44:11 +0000 (0:00:01.505) 1:00:57.353 *****
2026-02-19 06:44:51.071275 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071280 | orchestrator |
2026-02-19 06:44:51.071285 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-19 06:44:51.071290 | orchestrator | Thursday 19 February 2026 06:44:12 +0000 (0:00:01.140) 1:00:58.494 *****
2026-02-19 06:44:51.071295 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071299 | orchestrator |
2026-02-19 06:44:51.071304 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-19 06:44:51.071309 | orchestrator | Thursday 19 February 2026 06:44:13 +0000 (0:00:01.167) 1:00:59.661 *****
2026-02-19 06:44:51.071314 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071318 | orchestrator |
2026-02-19 06:44:51.071323 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-19 06:44:51.071328 | orchestrator | Thursday 19 February 2026 06:44:14 +0000 (0:00:01.100) 1:01:00.762 *****
2026-02-19 06:44:51.071333 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-02-19 06:44:51.071339 | orchestrator |
2026-02-19 06:44:51.071344 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-19 06:44:51.071348 | orchestrator | Thursday 19 February 2026 06:44:15 +0000 (0:00:01.104) 1:01:01.867 *****
2026-02-19 06:44:51.071353 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:51.071358 | orchestrator |
2026-02-19 06:44:51.071363 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-19 06:44:51.071368 | orchestrator | Thursday 19 February 2026 06:44:17 +0000 (0:00:01.667) 1:01:03.534 *****
2026-02-19 06:44:51.071373 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-19 06:44:51.071378 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-19 06:44:51.071382 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-19 06:44:51.071387 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071392 | orchestrator |
2026-02-19 06:44:51.071397 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-19 06:44:51.071401 | orchestrator | Thursday 19 February 2026 06:44:18 +0000 (0:00:01.132) 1:01:04.667 *****
2026-02-19 06:44:51.071406 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071411 | orchestrator |
2026-02-19 06:44:51.071416 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-19 06:44:51.071496 | orchestrator | Thursday 19 February 2026 06:44:19 +0000 (0:00:01.111) 1:01:05.779 *****
2026-02-19 06:44:51.071501 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071506 | orchestrator |
2026-02-19 06:44:51.071511 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-19 06:44:51.071516 | orchestrator | Thursday 19 February 2026 06:44:20 +0000 (0:00:01.184) 1:01:06.963 *****
2026-02-19 06:44:51.071521 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071525 | orchestrator |
2026-02-19 06:44:51.071530 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-19 06:44:51.071535 | orchestrator | Thursday 19 February 2026 06:44:21 +0000 (0:00:01.163) 1:01:08.127 *****
2026-02-19 06:44:51.071540 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071544 | orchestrator |
2026-02-19 06:44:51.071549 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-19 06:44:51.071554 | orchestrator | Thursday 19 February 2026 06:44:23 +0000 (0:00:01.130) 1:01:09.258 *****
2026-02-19 06:44:51.071559 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071564 | orchestrator |
2026-02-19 06:44:51.071568 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-19 06:44:51.071573 | orchestrator | Thursday 19 February 2026 06:44:24 +0000 (0:00:01.162) 1:01:10.420 *****
2026-02-19 06:44:51.071619 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:51.071628 | orchestrator |
2026-02-19 06:44:51.071636 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-19 06:44:51.071645 | orchestrator | Thursday 19 February 2026 06:44:26 +0000 (0:00:02.457) 1:01:12.877 *****
2026-02-19 06:44:51.071652 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:51.071661 | orchestrator |
2026-02-19 06:44:51.071668 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-19 06:44:51.071676 | orchestrator | Thursday 19 February 2026 06:44:27 +0000 (0:00:01.116) 1:01:13.993 *****
2026-02-19 06:44:51.071683 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-02-19 06:44:51.071691 | orchestrator |
2026-02-19 06:44:51.071699 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-19 06:44:51.071724 | orchestrator | Thursday 19 February 2026 06:44:28 +0000 (0:00:01.130) 1:01:15.124 *****
2026-02-19 06:44:51.071734 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071742 | orchestrator |
2026-02-19 06:44:51.071750 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-19 06:44:51.071758 | orchestrator | Thursday 19 February 2026 06:44:29 +0000 (0:00:01.072) 1:01:16.197 *****
2026-02-19 06:44:51.071767 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071775 | orchestrator |
2026-02-19 06:44:51.071784 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-19 06:44:51.071792 | orchestrator | Thursday 19 February 2026 06:44:30 +0000 (0:00:00.927) 1:01:17.124 *****
2026-02-19 06:44:51.071801 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071809 | orchestrator |
2026-02-19 06:44:51.071840 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-19 06:44:51.071850 | orchestrator | Thursday 19 February 2026 06:44:32 +0000 (0:00:01.152) 1:01:18.277 *****
2026-02-19 06:44:51.071855 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071860 | orchestrator |
2026-02-19 06:44:51.071865 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-19 06:44:51.071870 | orchestrator | Thursday 19 February 2026 06:44:33 +0000 (0:00:01.086) 1:01:19.364 *****
2026-02-19 06:44:51.071875 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071879 | orchestrator |
2026-02-19 06:44:51.071884 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-19 06:44:51.071889 | orchestrator | Thursday 19 February 2026 06:44:34 +0000 (0:00:01.098) 1:01:20.463 *****
2026-02-19 06:44:51.071894 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071898 | orchestrator |
2026-02-19 06:44:51.071903 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-19 06:44:51.071908 | orchestrator | Thursday 19 February 2026 06:44:35 +0000 (0:00:01.135) 1:01:21.599 *****
2026-02-19 06:44:51.071913 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071917 | orchestrator |
2026-02-19 06:44:51.071922 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-19 06:44:51.071927 | orchestrator | Thursday 19 February 2026 06:44:36 +0000 (0:00:01.102) 1:01:22.701 *****
2026-02-19 06:44:51.071932 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:44:51.071937 | orchestrator |
2026-02-19 06:44:51.071941 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-19 06:44:51.071946 | orchestrator | Thursday 19 February 2026 06:44:37 +0000 (0:00:01.105) 1:01:23.807 *****
2026-02-19 06:44:51.071951 | orchestrator | ok: [testbed-node-3]
2026-02-19 06:44:51.071959 | orchestrator |
2026-02-19 06:44:51.071967 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-19 06:44:51.071974 | orchestrator | Thursday 19 February 2026 06:44:38 +0000 (0:00:01.141) 1:01:24.949 *****
2026-02-19 06:44:51.071981 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-02-19 06:44:51.071990 | orchestrator |
2026-02-19 06:44:51.071998 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-19 06:44:51.072012 | orchestrator | Thursday 19 February 2026 06:44:39 +0000 (0:00:01.096) 1:01:26.046 *****
2026-02-19 06:44:51.072020 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-02-19 06:44:51.072029 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-19 06:44:51.072036 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-19 06:44:51.072044 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-19 06:44:51.072052 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-19 06:44:51.072060 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-19 06:44:51.072068 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-19 06:44:51.072076 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-19 06:44:51.072084 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-19 06:44:51.072092 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-19 06:44:51.072100 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-19 06:44:51.072108 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-19 06:44:51.072117 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-19 06:44:51.072125 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 06:44:51.072133 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-02-19 06:44:51.072141 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-02-19 06:44:51.072149 | orchestrator |
2026-02-19 06:44:51.072158 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-19 06:44:51.072166 | orchestrator | Thursday 19 February 2026 06:44:46 +0000 (0:00:06.651) 1:01:32.698 *****
2026-02-19 06:44:51.072174 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-02-19 06:44:51.072182 | orchestrator |
2026-02-19 06:44:51.072190 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-19 06:44:51.072198 | orchestrator | Thursday 19 February 2026 06:44:47 +0000 (0:00:01.104) 1:01:33.802 *****
2026-02-19 06:44:51.072207 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-19 06:44:51.072215 | orchestrator |
2026-02-19 06:44:51.072224 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-19 06:44:51.072232 | orchestrator | Thursday 19 February 2026 06:44:49 +0000 (0:00:01.510) 1:01:35.313 *****
2026-02-19 06:44:51.072240 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-19 06:44:51.072248 | orchestrator |
2026-02-19 06:44:51.072256 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-19 06:44:51.072272 | orchestrator | Thursday 19 February 2026 06:44:51 +0000 (0:00:01.969) 1:01:37.282 *****
2026-02-19 06:45:41.428298 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:45:41.428528 | orchestrator |
2026-02-19 06:45:41.428565 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 06:45:41.428585 | orchestrator | Thursday 19 February 2026 06:44:52 +0000 (0:00:01.133) 1:01:38.415 *****
2026-02-19 06:45:41.428606 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:45:41.428625 | orchestrator |
2026-02-19 06:45:41.428645 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 06:45:41.428666 | orchestrator | Thursday 19 February 2026 06:44:53 +0000 (0:00:01.123) 1:01:39.539 *****
2026-02-19 06:45:41.428686 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:45:41.428704 | orchestrator |
2026-02-19 06:45:41.428734 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 06:45:41.428745 | orchestrator | Thursday 19 February 2026 06:44:54 +0000 (0:00:01.107) 1:01:40.646 *****
2026-02-19 06:45:41.428756 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:45:41.428791 | orchestrator |
2026-02-19 06:45:41.428803 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 06:45:41.428816 | orchestrator | Thursday 19 February 2026 06:44:55 +0000 (0:00:01.122) 1:01:41.769 *****
2026-02-19 06:45:41.428828 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:45:41.428841 | orchestrator |
2026-02-19 06:45:41.428853 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 06:45:41.428868 | orchestrator | Thursday 19 February 2026 06:44:56 +0000 (0:00:01.110) 1:01:42.879 *****
2026-02-19 06:45:41.428880 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:45:41.428893 | orchestrator |
2026-02-19 06:45:41.428905 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 06:45:41.428918 | orchestrator | Thursday 19 February 2026 06:44:57 +0000 (0:00:01.110) 1:01:43.989 *****
2026-02-19 06:45:41.428931 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:45:41.428944 | orchestrator |
2026-02-19 06:45:41.428957 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 06:45:41.428970 | orchestrator | Thursday 19 February 2026 06:44:58 +0000 (0:00:01.137) 1:01:45.127 *****
2026-02-19 06:45:41.428982 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:45:41.428995 | orchestrator |
2026-02-19 06:45:41.429006 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 06:45:41.429017 | orchestrator | Thursday 19 February 2026 06:44:59 +0000 (0:00:01.092) 1:01:46.219 *****
2026-02-19 06:45:41.429028 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:45:41.429039 | orchestrator |
2026-02-19 06:45:41.429050 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 06:45:41.429061 | orchestrator | Thursday 19 February 2026 06:45:01 +0000 (0:00:01.140) 1:01:47.360 *****
2026-02-19 06:45:41.429072 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:45:41.429083 | orchestrator |
2026-02-19 06:45:41.429094 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 06:45:41.429105 | orchestrator | Thursday 19 February 2026 06:45:02 +0000 (0:00:01.127) 1:01:48.487 *****
2026-02-19 06:45:41.429115 | orchestrator | skipping: [testbed-node-3]
2026-02-19 06:45:41.429126 | orchestrator |
2026-02-19 06:45:41.429137 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 06:45:41.429148 | orchestrator | Thursday 19 February 2026 06:45:03 +0000 (0:00:01.137) 1:01:49.625 *****
2026-02-19 06:45:41.429158 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-19 06:45:41.429169 | orchestrator |
2026-02-19 06:45:41.429180 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 06:45:41.429191 | orchestrator | Thursday 19 February 2026 06:45:07 +0000 (0:00:04.513) 1:01:54.138 *****
2026-02-19 06:45:41.429202 | orchestrator |
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-19 06:45:41.429214 | orchestrator | 2026-02-19 06:45:41.429225 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-19 06:45:41.429236 | orchestrator | Thursday 19 February 2026 06:45:09 +0000 (0:00:01.149) 1:01:55.287 ***** 2026-02-19 06:45:41.429251 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-19 06:45:41.429265 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-19 06:45:41.429277 | orchestrator | 2026-02-19 06:45:41.429296 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-19 06:45:41.429308 | orchestrator | Thursday 19 February 2026 06:45:14 +0000 (0:00:05.405) 1:02:00.693 ***** 2026-02-19 06:45:41.429318 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:45:41.429329 | orchestrator | 2026-02-19 06:45:41.429340 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-19 06:45:41.429351 | orchestrator | Thursday 19 February 2026 06:45:15 +0000 (0:00:01.101) 1:02:01.794 ***** 2026-02-19 06:45:41.429362 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:45:41.429372 | orchestrator | 2026-02-19 06:45:41.429383 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-19 06:45:41.429444 | orchestrator | Thursday 19 February 2026 06:45:16 +0000 (0:00:01.131) 1:02:02.926 ***** 2026-02-19 06:45:41.429460 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:45:41.429471 | orchestrator | 2026-02-19 06:45:41.429482 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-19 06:45:41.429493 | orchestrator | Thursday 19 February 2026 06:45:17 +0000 (0:00:01.153) 1:02:04.079 ***** 2026-02-19 06:45:41.429504 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:45:41.429514 | orchestrator | 2026-02-19 06:45:41.429525 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-19 06:45:41.429536 | orchestrator | Thursday 19 February 2026 06:45:18 +0000 (0:00:01.144) 1:02:05.224 ***** 2026-02-19 06:45:41.429553 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:45:41.429564 | orchestrator | 2026-02-19 06:45:41.429575 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-19 06:45:41.429586 | orchestrator | Thursday 19 February 2026 06:45:20 +0000 (0:00:01.130) 1:02:06.355 ***** 2026-02-19 06:45:41.429597 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:45:41.429609 | orchestrator | 2026-02-19 06:45:41.429619 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-19 06:45:41.429630 | orchestrator | Thursday 19 February 2026 06:45:21 +0000 (0:00:01.248) 1:02:07.603 ***** 2026-02-19 06:45:41.429641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 06:45:41.429652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 06:45:41.429663 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 06:45:41.429674 | orchestrator | skipping: 
[testbed-node-3] 2026-02-19 06:45:41.429685 | orchestrator | 2026-02-19 06:45:41.429696 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-19 06:45:41.429707 | orchestrator | Thursday 19 February 2026 06:45:22 +0000 (0:00:01.456) 1:02:09.060 ***** 2026-02-19 06:45:41.429718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 06:45:41.429729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 06:45:41.429740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 06:45:41.429751 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:45:41.429762 | orchestrator | 2026-02-19 06:45:41.429772 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 06:45:41.429783 | orchestrator | Thursday 19 February 2026 06:45:24 +0000 (0:00:01.486) 1:02:10.546 ***** 2026-02-19 06:45:41.429794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-19 06:45:41.429805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-19 06:45:41.429816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-19 06:45:41.429826 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:45:41.429837 | orchestrator | 2026-02-19 06:45:41.429848 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 06:45:41.429859 | orchestrator | Thursday 19 February 2026 06:45:25 +0000 (0:00:01.414) 1:02:11.960 ***** 2026-02-19 06:45:41.429870 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:45:41.429881 | orchestrator | 2026-02-19 06:45:41.429892 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-19 06:45:41.429912 | orchestrator | Thursday 19 February 2026 06:45:26 +0000 (0:00:01.150) 1:02:13.111 ***** 2026-02-19 06:45:41.429923 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-02-19 06:45:41.429934 | orchestrator | 2026-02-19 06:45:41.429945 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-19 06:45:41.429955 | orchestrator | Thursday 19 February 2026 06:45:28 +0000 (0:00:01.317) 1:02:14.428 ***** 2026-02-19 06:45:41.429966 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:45:41.429977 | orchestrator | 2026-02-19 06:45:41.429988 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-19 06:45:41.429999 | orchestrator | Thursday 19 February 2026 06:45:30 +0000 (0:00:01.815) 1:02:16.244 ***** 2026-02-19 06:45:41.430009 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-02-19 06:45:41.430082 | orchestrator | 2026-02-19 06:45:41.430093 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-19 06:45:41.430104 | orchestrator | Thursday 19 February 2026 06:45:31 +0000 (0:00:01.478) 1:02:17.723 ***** 2026-02-19 06:45:41.430115 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 06:45:41.430125 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-19 06:45:41.430136 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-19 06:45:41.430147 | orchestrator | 2026-02-19 06:45:41.430158 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-19 06:45:41.430169 | orchestrator | Thursday 19 February 2026 06:45:35 +0000 (0:00:03.671) 1:02:21.394 ***** 2026-02-19 06:45:41.430179 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-19 06:45:41.430191 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-19 06:45:41.430201 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:45:41.430212 | orchestrator | 2026-02-19 06:45:41.430223 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-19 06:45:41.430234 | orchestrator | Thursday 19 February 2026 06:45:37 +0000 (0:00:01.970) 1:02:23.365 ***** 2026-02-19 06:45:41.430245 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:45:41.430256 | orchestrator | 2026-02-19 06:45:41.430267 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-19 06:45:41.430278 | orchestrator | Thursday 19 February 2026 06:45:38 +0000 (0:00:01.114) 1:02:24.479 ***** 2026-02-19 06:45:41.430289 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-02-19 06:45:41.430301 | orchestrator | 2026-02-19 06:45:41.430312 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-19 06:45:41.430323 | orchestrator | Thursday 19 February 2026 06:45:39 +0000 (0:00:01.455) 1:02:25.935 ***** 2026-02-19 06:45:41.430342 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-19 06:46:57.980348 | orchestrator | 2026-02-19 06:46:57.980523 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-19 06:46:57.980550 | orchestrator | Thursday 19 February 2026 06:45:41 +0000 (0:00:01.707) 1:02:27.642 ***** 2026-02-19 06:46:57.980568 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 06:46:57.980587 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-19 06:46:57.980605 | orchestrator | 2026-02-19 06:46:57.980645 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-19 06:46:57.980664 | orchestrator | Thursday 19 February 2026 06:45:46 +0000 (0:00:05.363) 1:02:33.005 ***** 
2026-02-19 06:46:57.980682 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 06:46:57.980701 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-19 06:46:57.980718 | orchestrator | 2026-02-19 06:46:57.980736 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-19 06:46:57.980784 | orchestrator | Thursday 19 February 2026 06:45:50 +0000 (0:00:03.385) 1:02:36.391 ***** 2026-02-19 06:46:57.980797 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-19 06:46:57.980809 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:46:57.980827 | orchestrator | 2026-02-19 06:46:57.980844 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-19 06:46:57.980860 | orchestrator | Thursday 19 February 2026 06:45:52 +0000 (0:00:02.078) 1:02:38.469 ***** 2026-02-19 06:46:57.980878 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-19 06:46:57.980893 | orchestrator | 2026-02-19 06:46:57.980910 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-19 06:46:57.980928 | orchestrator | Thursday 19 February 2026 06:45:53 +0000 (0:00:01.558) 1:02:40.028 ***** 2026-02-19 06:46:57.980944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:46:57.980956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:46:57.980969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:46:57.980987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-19 06:46:57.981005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:46:57.981022 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:46:57.981039 | orchestrator | 2026-02-19 06:46:57.981055 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-19 06:46:57.981066 | orchestrator | Thursday 19 February 2026 06:45:55 +0000 (0:00:01.672) 1:02:41.700 ***** 2026-02-19 06:46:57.981078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:46:57.981089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:46:57.981100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:46:57.981112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:46:57.981123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:46:57.981134 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:46:57.981143 | orchestrator | 2026-02-19 06:46:57.981153 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-19 06:46:57.981163 | orchestrator | Thursday 19 February 2026 06:45:57 +0000 (0:00:01.574) 1:02:43.274 ***** 2026-02-19 06:46:57.981172 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-19 06:46:57.981184 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-19 06:46:57.981194 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-19 06:46:57.981204 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-19 06:46:57.981215 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-19 06:46:57.981233 | orchestrator | 2026-02-19 06:46:57.981243 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-19 06:46:57.981274 | orchestrator | Thursday 19 February 2026 06:46:30 +0000 (0:00:33.806) 1:03:17.081 ***** 2026-02-19 06:46:57.981284 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:46:57.981294 | orchestrator | 2026-02-19 06:46:57.981304 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-19 06:46:57.981314 | orchestrator | Thursday 19 February 2026 06:46:32 +0000 (0:00:01.161) 1:03:18.242 ***** 2026-02-19 06:46:57.981323 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:46:57.981333 | orchestrator | 2026-02-19 06:46:57.981342 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-19 06:46:57.981358 | orchestrator | Thursday 19 February 2026 06:46:33 +0000 (0:00:01.114) 1:03:19.357 ***** 2026-02-19 06:46:57.981368 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-02-19 06:46:57.981378 | orchestrator | 2026-02-19 06:46:57.981387 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-19 06:46:57.981434 | orchestrator | Thursday 19 February 2026 06:46:34 +0000 (0:00:01.446) 1:03:20.803 ***** 2026-02-19 06:46:57.981453 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-02-19 06:46:57.981465 | orchestrator | 2026-02-19 06:46:57.981474 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-19 06:46:57.981484 | orchestrator | Thursday 19 February 2026 06:46:36 +0000 (0:00:01.548) 1:03:22.352 ***** 2026-02-19 06:46:57.981493 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:46:57.981503 | orchestrator | 2026-02-19 06:46:57.981512 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-19 06:46:57.981522 | orchestrator | Thursday 19 February 2026 06:46:38 +0000 (0:00:02.019) 1:03:24.372 ***** 2026-02-19 06:46:57.981531 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:46:57.981541 | orchestrator | 2026-02-19 06:46:57.981550 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-19 06:46:57.981560 | orchestrator | Thursday 19 February 2026 06:46:40 +0000 (0:00:01.904) 1:03:26.276 ***** 2026-02-19 06:46:57.981569 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:46:57.981579 | orchestrator | 2026-02-19 06:46:57.981588 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-19 06:46:57.981598 | orchestrator | Thursday 19 February 2026 06:46:42 +0000 (0:00:02.192) 1:03:28.469 ***** 2026-02-19 06:46:57.981607 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-19 06:46:57.981617 | orchestrator | 2026-02-19 06:46:57.981627 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-19 06:46:57.981636 | 
orchestrator | 2026-02-19 06:46:57.981651 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 06:46:57.981667 | orchestrator | Thursday 19 February 2026 06:46:45 +0000 (0:00:03.172) 1:03:31.641 ***** 2026-02-19 06:46:57.981682 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-19 06:46:57.981697 | orchestrator | 2026-02-19 06:46:57.981712 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-19 06:46:57.981729 | orchestrator | Thursday 19 February 2026 06:46:46 +0000 (0:00:01.108) 1:03:32.750 ***** 2026-02-19 06:46:57.981745 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:46:57.981761 | orchestrator | 2026-02-19 06:46:57.981777 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-19 06:46:57.981823 | orchestrator | Thursday 19 February 2026 06:46:47 +0000 (0:00:01.436) 1:03:34.186 ***** 2026-02-19 06:46:57.981839 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:46:57.981855 | orchestrator | 2026-02-19 06:46:57.981871 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 06:46:57.981900 | orchestrator | Thursday 19 February 2026 06:46:49 +0000 (0:00:01.144) 1:03:35.331 ***** 2026-02-19 06:46:57.981916 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:46:57.981932 | orchestrator | 2026-02-19 06:46:57.981949 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 06:46:57.981965 | orchestrator | Thursday 19 February 2026 06:46:50 +0000 (0:00:01.481) 1:03:36.813 ***** 2026-02-19 06:46:57.981980 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:46:57.981997 | orchestrator | 2026-02-19 06:46:57.982014 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-19 06:46:57.982113 | orchestrator | Thursday 
19 February 2026 06:46:51 +0000 (0:00:01.121) 1:03:37.935 ***** 2026-02-19 06:46:57.982130 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:46:57.982147 | orchestrator | 2026-02-19 06:46:57.982163 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-19 06:46:57.982180 | orchestrator | Thursday 19 February 2026 06:46:52 +0000 (0:00:01.169) 1:03:39.104 ***** 2026-02-19 06:46:57.982197 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:46:57.982213 | orchestrator | 2026-02-19 06:46:57.982230 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-19 06:46:57.982248 | orchestrator | Thursday 19 February 2026 06:46:54 +0000 (0:00:01.142) 1:03:40.247 ***** 2026-02-19 06:46:57.982265 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:46:57.982283 | orchestrator | 2026-02-19 06:46:57.982300 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-19 06:46:57.982311 | orchestrator | Thursday 19 February 2026 06:46:55 +0000 (0:00:01.139) 1:03:41.386 ***** 2026-02-19 06:46:57.982326 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:46:57.982343 | orchestrator | 2026-02-19 06:46:57.982359 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-19 06:46:57.982376 | orchestrator | Thursday 19 February 2026 06:46:56 +0000 (0:00:01.118) 1:03:42.505 ***** 2026-02-19 06:46:57.982430 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:46:57.982444 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:46:57.982453 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:46:57.982463 | orchestrator | 2026-02-19 06:46:57.982473 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-19 06:46:57.982497 | orchestrator | Thursday 19 February 2026 06:46:57 +0000 (0:00:01.682) 1:03:44.188 ***** 2026-02-19 06:47:23.503417 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:47:23.503532 | orchestrator | 2026-02-19 06:47:23.503551 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-19 06:47:23.503564 | orchestrator | Thursday 19 February 2026 06:46:59 +0000 (0:00:01.244) 1:03:45.432 ***** 2026-02-19 06:47:23.503576 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:47:23.503604 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:47:23.503616 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:47:23.503627 | orchestrator | 2026-02-19 06:47:23.503638 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-19 06:47:23.503649 | orchestrator | Thursday 19 February 2026 06:47:02 +0000 (0:00:02.832) 1:03:48.265 ***** 2026-02-19 06:47:23.503661 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-19 06:47:23.503672 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-19 06:47:23.503683 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-19 06:47:23.503694 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:47:23.503705 | orchestrator | 2026-02-19 06:47:23.503716 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-19 06:47:23.503727 | orchestrator | Thursday 19 February 2026 06:47:03 +0000 (0:00:01.404) 1:03:49.669 ***** 2026-02-19 06:47:23.503762 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-19 06:47:23.503781 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-19 06:47:23.503801 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-19 06:47:23.503819 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:47:23.503837 | orchestrator | 2026-02-19 06:47:23.503854 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-19 06:47:23.503871 | orchestrator | Thursday 19 February 2026 06:47:05 +0000 (0:00:01.583) 1:03:51.253 ***** 2026-02-19 06:47:23.503892 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:23.503912 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:23.503930 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:23.503951 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:47:23.503969 | orchestrator | 2026-02-19 06:47:23.503987 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-19 06:47:23.504006 | orchestrator | Thursday 19 February 2026 06:47:06 +0000 (0:00:01.164) 1:03:52.418 ***** 2026-02-19 06:47:23.504052 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 06:46:59.749801', 'end': '2026-02-19 06:46:59.798257', 'delta': '0:00:00.048456', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-19 06:47:23.504088 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 06:47:00.318892', 'end': '2026-02-19 06:47:00.360572', 'delta': '0:00:00.041680', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 06:47:23.504125 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '8bdbabe346bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 06:47:00.863852', 'end': '2026-02-19 06:47:00.912035', 'delta': '0:00:00.048183', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8bdbabe346bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 06:47:23.504147 | orchestrator |
2026-02-19 06:47:23.504163 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-19 06:47:23.504175 | orchestrator | Thursday 19 February 2026 06:47:07 +0000 (0:00:01.167) 1:03:53.586 *****
2026-02-19 06:47:23.504188 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:47:23.504200 | orchestrator |
2026-02-19 06:47:23.504213 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-19 06:47:23.504226 | orchestrator | Thursday 19 February 2026 06:47:08 +0000 (0:00:01.254) 1:03:54.840 *****
2026-02-19 06:47:23.504239 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:47:23.504251 | orchestrator |
2026-02-19 06:47:23.504265 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-19 06:47:23.504285 | orchestrator | Thursday 19 February 2026 06:47:09 +0000 (0:00:01.230) 1:03:56.071 *****
2026-02-19 06:47:23.504325 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:47:23.504359 | orchestrator |
2026-02-19 06:47:23.504380 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-19 06:47:23.504451 | orchestrator | Thursday 19 February 2026 06:47:11 +0000 (0:00:01.166) 1:03:57.238 *****
2026-02-19 06:47:23.504470 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-19 06:47:23.504488 | orchestrator |
2026-02-19 06:47:23.504507 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 06:47:23.504526 | orchestrator | Thursday 19 February 2026 06:47:13 +0000 (0:00:02.985) 1:04:00.224 *****
2026-02-19 06:47:23.504545 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:47:23.504557 | orchestrator |
2026-02-19 06:47:23.504571 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-19 06:47:23.504589 | orchestrator | Thursday 19 February 2026 06:47:15 +0000 (0:00:01.124) 1:04:01.348 *****
2026-02-19 06:47:23.504607 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:47:23.504625 | orchestrator |
2026-02-19 06:47:23.504642 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-19 06:47:23.504659 | orchestrator | Thursday 19 February 2026 06:47:16 +0000 (0:00:01.125) 1:04:02.474 *****
2026-02-19 06:47:23.504679 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:47:23.504698 | orchestrator |
2026-02-19 06:47:23.504717 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 06:47:23.504736 | orchestrator | Thursday 19 February 2026 06:47:17 +0000 (0:00:01.554) 1:04:04.029 *****
2026-02-19 06:47:23.504747 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:47:23.504758 | orchestrator |
2026-02-19 06:47:23.504768 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-19 06:47:23.504779 | orchestrator | Thursday 19 February 2026 06:47:18 +0000 (0:00:01.121) 1:04:05.151 *****
2026-02-19 06:47:23.504790 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:47:23.504800 | orchestrator |
2026-02-19 06:47:23.504811 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-19 06:47:23.504833 | orchestrator | Thursday 19 February 2026 06:47:20 +0000 (0:00:01.125) 1:04:06.276 *****
2026-02-19 06:47:23.504844 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:47:23.504854 | orchestrator |
2026-02-19 06:47:23.504865 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-19 06:47:23.504875 | orchestrator | Thursday 19 February 2026 06:47:21 +0000 (0:00:01.182) 1:04:07.458 *****
2026-02-19 06:47:23.504886 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:47:23.504897 | orchestrator |
2026-02-19 06:47:23.504907 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-19 06:47:23.504918 | orchestrator | Thursday 19 February 2026 06:47:22 +0000 (0:00:01.118) 1:04:08.576 *****
2026-02-19 06:47:23.504929 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:47:23.504939 | orchestrator |
2026-02-19 06:47:23.504950 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-19 06:47:23.504972 | orchestrator | Thursday 19 February 2026 06:47:23 +0000 (0:00:01.142) 1:04:09.719 *****
2026-02-19 06:47:25.979181 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:47:25.979305 | orchestrator |
2026-02-19 06:47:25.979325 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-19 06:47:25.979339
| orchestrator | Thursday 19 February 2026 06:47:24 +0000 (0:00:01.113) 1:04:10.832 ***** 2026-02-19 06:47:25.979352 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:47:25.979364 | orchestrator | 2026-02-19 06:47:25.979448 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 06:47:25.979464 | orchestrator | Thursday 19 February 2026 06:47:25 +0000 (0:00:01.153) 1:04:11.985 ***** 2026-02-19 06:47:25.979484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:47:25.979509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160', 'dm-uuid-LVM-rZldl4LmlLXg6d7bs7fyJX4wA6bTnXoE36sCfZeCCq67ndja1fQrkP9qxd3UF2mf'], 'uuids': ['a59715b7-019c-4dda-9336-d3b7804a06c1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf']}})  2026-02-19 06:47:25.979533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58', 'scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85ad02dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 06:47:25.979555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hPAd08-UuBL-3Ygg-jY8a-jEiG-hu1p-INZmAJ', 'scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262', 'scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02']}})  2026-02-19 06:47:25.979603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:47:25.979621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:47:25.979660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-20-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 06:47:25.979673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:47:25.979692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww', 'dm-uuid-CRYPT-LUKS2-f68538a13fa347dc9b85a13ec62262c1-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:47:25.979712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:47:25.979732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02', 'dm-uuid-LVM-av3z15qCzrck2TCuh26quy9SxGc4Uj0HHGk96w6thbK5NXQZcgefX0YYJ6eJW1Ww'], 'uuids': ['f68538a1-3fa3-47dc-9b85-a13ec62262c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww']}})  2026-02-19 06:47:25.979751 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6C6XL0-fLb8-YfTA-cysM-yAaf-4LBE-w1N2gW', 'scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0', 'scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160']}})  2026-02-19 06:47:25.979783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:47:25.979878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '28e9d7a7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 06:47:27.328593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:47:27.328690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:47:27.328728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf', 'dm-uuid-CRYPT-LUKS2-a59715b7019c4dda9336d3b7804a06c1-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:47:27.328743 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:47:27.328756 | orchestrator | 2026-02-19 06:47:27.328767 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 06:47:27.328778 | orchestrator | Thursday 19 February 2026 06:47:27 +0000 (0:00:01.340) 1:04:13.326 ***** 2026-02-19 06:47:27.328789 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:27.328815 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160', 'dm-uuid-LVM-rZldl4LmlLXg6d7bs7fyJX4wA6bTnXoE36sCfZeCCq67ndja1fQrkP9qxd3UF2mf'], 'uuids': ['a59715b7-019c-4dda-9336-d3b7804a06c1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:27.328827 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58', 'scsi-SQEMU_QEMU_HARDDISK_85ad02dc-7182-4f7f-aeb0-a64abf6b1c58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85ad02dc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:27.328856 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hPAd08-UuBL-3Ygg-jY8a-jEiG-hu1p-INZmAJ', 'scsi-0QEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262', 'scsi-SQEMU_QEMU_HARDDISK_06128b56-8ab2-4257-b6d0-e15d23330262'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:27.328877 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:27.328888 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:27.328903 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-20-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:27.328914 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:27.328931 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww', 'dm-uuid-CRYPT-LUKS2-f68538a13fa347dc9b85a13ec62262c1-HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:32.620708 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:32.620810 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--64a1f4ab--0c55--53ad--929a--fda4cfe46a02-osd--block--64a1f4ab--0c55--53ad--929a--fda4cfe46a02', 'dm-uuid-LVM-av3z15qCzrck2TCuh26quy9SxGc4Uj0HHGk96w6thbK5NXQZcgefX0YYJ6eJW1Ww'], 'uuids': ['f68538a1-3fa3-47dc-9b85-a13ec62262c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '06128b56', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HGk96w-6thb-K5NX-QZcg-efX0-YYJ6-eJW1Ww']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:32.620820 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6C6XL0-fLb8-YfTA-cysM-yAaf-4LBE-w1N2gW', 'scsi-0QEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0', 'scsi-SQEMU_QEMU_HARDDISK_170e0235-dc73-4e1c-89b5-c2562fe21aa0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '170e0235', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160-osd--block--ac535f4d--dfa1--5efd--bfb5--368e6c7a2160']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:32.620838 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:32.620856 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '28e9d7a7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_28e9d7a7-0f4d-4da3-8222-650c024604ec-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:32.620869 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:32.620877 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:47:32.620883 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf', 'dm-uuid-CRYPT-LUKS2-a59715b7019c4dda9336d3b7804a06c1-36sCfZ-eCCq-67nd-ja1f-QrkP-9qxd-3UF2mf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 06:47:32.620891 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:47:32.620898 | orchestrator |
2026-02-19 06:47:32.620904 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-19 06:47:32.620911 | orchestrator | Thursday 19 February 2026 06:47:28 +0000 (0:00:01.506) 1:04:14.689 *****
2026-02-19 06:47:32.620916 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:47:32.620922 | orchestrator |
2026-02-19 06:47:32.620927 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-19 06:47:32.620933 | orchestrator | Thursday 19 February 2026 06:47:29 +0000 (0:00:01.134) 1:04:16.196 *****
2026-02-19 06:47:32.620949 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:47:32.620958 | orchestrator |
2026-02-19 06:47:32.620966 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-19 06:47:32.620975 | orchestrator | Thursday 19 February 2026 06:47:31 +0000 (0:00:01.509) 1:04:17.331 *****
2026-02-19 06:47:32.620985 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:47:32.620994 | orchestrator |
2026-02-19 06:47:32.621004 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-19 06:47:32.621018 | orchestrator | Thursday 19 February 2026 06:47:32 +0000 (0:00:01.509) 1:04:18.840 *****
2026-02-19 06:48:14.607781 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:48:14.607878 | orchestrator |
2026-02-19 06:48:14.607890 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-19 06:48:14.607899 | orchestrator | Thursday 19 February 2026 06:47:33 +0000 (0:00:01.136) 1:04:19.977 *****
2026-02-19 06:48:14.607907 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:48:14.607915 | orchestrator |
2026-02-19 06:48:14.607923 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-19 06:48:14.607931 | orchestrator | Thursday 19 February 2026 06:47:34 +0000 (0:00:01.243) 1:04:21.220 *****
2026-02-19 06:48:14.607938 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:48:14.607945 | orchestrator |
2026-02-19 06:48:14.607953 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-19 06:48:14.607960 | orchestrator | Thursday 19 February 2026 06:47:36 +0000 (0:00:01.225) 1:04:22.445 *****
2026-02-19 06:48:14.607968 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-19 06:48:14.607976 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-19 06:48:14.607983 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-19 06:48:14.607991 | orchestrator |
2026-02-19 06:48:14.607998 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-19 06:48:14.608005 | orchestrator | Thursday 19 February 2026 06:47:37 +0000 (0:00:01.650) 1:04:24.096 *****
2026-02-19 06:48:14.608013 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-19 06:48:14.608020 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-19 06:48:14.608027 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-19 06:48:14.608034 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:48:14.608042 | orchestrator |
2026-02-19 06:48:14.608049 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-19 06:48:14.608056 | orchestrator | Thursday 19 February 2026 06:47:39 +0000 (0:00:01.226) 1:04:25.323 *****
2026-02-19 06:48:14.608064 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-02-19 06:48:14.608071 | orchestrator |
2026-02-19 06:48:14.608079 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:48:14.608088 | orchestrator | Thursday 19 February 2026 06:47:40 +0000 (0:00:01.122) 1:04:26.445 *****
2026-02-19 06:48:14.608095 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:48:14.608103 | orchestrator |
2026-02-19 06:48:14.608110 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:48:14.608117 | orchestrator | Thursday 19 February 2026 06:47:41 +0000 (0:00:01.137) 1:04:27.582 *****
2026-02-19 06:48:14.608125 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:48:14.608132 | orchestrator |
2026-02-19 06:48:14.608140 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:48:14.608147 | orchestrator | Thursday 19 February 2026 06:47:42 +0000 (0:00:01.149) 1:04:28.732 *****
2026-02-19 06:48:14.608154 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:48:14.608161 | orchestrator |
2026-02-19 06:48:14.608169 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:48:14.608176 | orchestrator | Thursday 19 February 2026 06:47:43 +0000 (0:00:01.147) 1:04:29.880 *****
2026-02-19 06:48:14.608184 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:48:14.608214 | orchestrator |
2026-02-19 06:48:14.608221 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:48:14.608229 | orchestrator | Thursday 19 February 2026 06:47:44 +0000 (0:00:01.234) 1:04:31.115 *****
2026-02-19 06:48:14.608236 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-19 06:48:14.608255 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-19 06:48:14.608262 | orchestrator | skipping: [testbed-node-4]
=> (item=testbed-node-5)  2026-02-19 06:48:14.608270 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:14.608277 | orchestrator | 2026-02-19 06:48:14.608284 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-19 06:48:14.608292 | orchestrator | Thursday 19 February 2026 06:47:46 +0000 (0:00:01.815) 1:04:32.930 ***** 2026-02-19 06:48:14.608299 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-19 06:48:14.608306 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-19 06:48:14.608313 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-19 06:48:14.608321 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:14.608328 | orchestrator | 2026-02-19 06:48:14.608335 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 06:48:14.608343 | orchestrator | Thursday 19 February 2026 06:47:48 +0000 (0:00:01.701) 1:04:34.632 ***** 2026-02-19 06:48:14.608350 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-19 06:48:14.608357 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-19 06:48:14.608365 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-19 06:48:14.608372 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:14.608396 | orchestrator | 2026-02-19 06:48:14.608404 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 06:48:14.608411 | orchestrator | Thursday 19 February 2026 06:47:50 +0000 (0:00:01.700) 1:04:36.332 ***** 2026-02-19 06:48:14.608418 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:14.608426 | orchestrator | 2026-02-19 06:48:14.608433 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-19 06:48:14.608440 | orchestrator | Thursday 19 February 2026 06:47:51 +0000 
(0:00:01.180) 1:04:37.513 ***** 2026-02-19 06:48:14.608447 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-19 06:48:14.608454 | orchestrator | 2026-02-19 06:48:14.608461 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-19 06:48:14.608468 | orchestrator | Thursday 19 February 2026 06:47:52 +0000 (0:00:01.351) 1:04:38.864 ***** 2026-02-19 06:48:14.608488 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:48:14.608495 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:48:14.608502 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:48:14.608509 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-19 06:48:14.608515 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-19 06:48:14.608527 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 06:48:14.608537 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 06:48:14.608544 | orchestrator | 2026-02-19 06:48:14.608550 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-19 06:48:14.608557 | orchestrator | Thursday 19 February 2026 06:47:54 +0000 (0:00:01.787) 1:04:40.652 ***** 2026-02-19 06:48:14.608565 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-19 06:48:14.608572 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-19 06:48:14.608578 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-19 06:48:14.608593 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-19 06:48:14.608601 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-19 06:48:14.608608 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-19 06:48:14.608615 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-19 06:48:14.608622 | orchestrator | 2026-02-19 06:48:14.608629 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-02-19 06:48:14.608637 | orchestrator | Thursday 19 February 2026 06:47:56 +0000 (0:00:02.230) 1:04:42.882 ***** 2026-02-19 06:48:14.608644 | orchestrator | changed: [testbed-node-4] 2026-02-19 06:48:14.608651 | orchestrator | 2026-02-19 06:48:14.608657 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-02-19 06:48:14.608664 | orchestrator | Thursday 19 February 2026 06:47:58 +0000 (0:00:01.979) 1:04:44.862 ***** 2026-02-19 06:48:14.608671 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-19 06:48:14.608678 | orchestrator | 2026-02-19 06:48:14.608686 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-02-19 06:48:14.608693 | orchestrator | Thursday 19 February 2026 06:48:01 +0000 (0:00:02.606) 1:04:47.469 ***** 2026-02-19 06:48:14.608701 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-19 06:48:14.608709 | orchestrator | 2026-02-19 06:48:14.608717 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-19 06:48:14.608724 | orchestrator | Thursday 19 February 2026 06:48:03 +0000 (0:00:02.024) 1:04:49.494 ***** 2026-02-19 06:48:14.608732 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-19 06:48:14.608739 | orchestrator | 2026-02-19 06:48:14.608746 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-19 06:48:14.608753 | orchestrator | Thursday 19 February 2026 06:48:04 +0000 (0:00:01.094) 1:04:50.588 ***** 2026-02-19 06:48:14.608766 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-19 06:48:14.608774 | orchestrator | 2026-02-19 06:48:14.608781 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-19 06:48:14.608787 | orchestrator | Thursday 19 February 2026 06:48:05 +0000 (0:00:01.187) 1:04:51.776 ***** 2026-02-19 06:48:14.608793 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:14.608800 | orchestrator | 2026-02-19 06:48:14.608808 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-19 06:48:14.608816 | orchestrator | Thursday 19 February 2026 06:48:06 +0000 (0:00:01.120) 1:04:52.897 ***** 2026-02-19 06:48:14.608823 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:14.608830 | orchestrator | 2026-02-19 06:48:14.608838 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-02-19 06:48:14.608845 | orchestrator | Thursday 19 February 2026 06:48:08 +0000 (0:00:01.496) 1:04:54.393 ***** 2026-02-19 06:48:14.608853 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:14.608860 | orchestrator | 2026-02-19 06:48:14.608867 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-19 06:48:14.608874 | orchestrator | Thursday 19 February 2026 06:48:09 +0000 (0:00:01.514) 1:04:55.908 ***** 2026-02-19 06:48:14.608879 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:14.608883 | orchestrator | 2026-02-19 06:48:14.608888 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-19 06:48:14.608892 | orchestrator | Thursday 19 February 2026 06:48:11 +0000 (0:00:01.551) 1:04:57.459 ***** 2026-02-19 06:48:14.608896 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:14.608901 | orchestrator | 2026-02-19 06:48:14.608905 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-19 06:48:14.608909 | orchestrator | Thursday 19 February 2026 06:48:12 +0000 (0:00:01.129) 1:04:58.589 ***** 2026-02-19 06:48:14.608920 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:14.608924 | orchestrator | 2026-02-19 06:48:14.608929 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-19 06:48:14.608933 | orchestrator | Thursday 19 February 2026 06:48:13 +0000 (0:00:01.109) 1:04:59.698 ***** 2026-02-19 06:48:14.608937 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:14.608941 | orchestrator | 2026-02-19 06:48:14.608946 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-19 06:48:14.608958 | orchestrator | Thursday 19 February 2026 06:48:14 +0000 (0:00:01.123) 1:05:00.821 ***** 2026-02-19 06:48:54.125611 | 
orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:54.125749 | orchestrator | 2026-02-19 06:48:54.125768 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-19 06:48:54.125782 | orchestrator | Thursday 19 February 2026 06:48:16 +0000 (0:00:01.528) 1:05:02.349 ***** 2026-02-19 06:48:54.125795 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:54.125806 | orchestrator | 2026-02-19 06:48:54.125828 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-19 06:48:54.125840 | orchestrator | Thursday 19 February 2026 06:48:17 +0000 (0:00:01.565) 1:05:03.915 ***** 2026-02-19 06:48:54.125851 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.125864 | orchestrator | 2026-02-19 06:48:54.125875 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-19 06:48:54.125886 | orchestrator | Thursday 19 February 2026 06:48:18 +0000 (0:00:00.759) 1:05:04.675 ***** 2026-02-19 06:48:54.125897 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.125908 | orchestrator | 2026-02-19 06:48:54.125919 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-19 06:48:54.125930 | orchestrator | Thursday 19 February 2026 06:48:19 +0000 (0:00:00.749) 1:05:05.425 ***** 2026-02-19 06:48:54.125941 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:54.125951 | orchestrator | 2026-02-19 06:48:54.125962 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-19 06:48:54.125973 | orchestrator | Thursday 19 February 2026 06:48:19 +0000 (0:00:00.777) 1:05:06.202 ***** 2026-02-19 06:48:54.125984 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:54.125995 | orchestrator | 2026-02-19 06:48:54.126005 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-19 06:48:54.126067 
| orchestrator | Thursday 19 February 2026 06:48:20 +0000 (0:00:00.786) 1:05:06.989 ***** 2026-02-19 06:48:54.126079 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:54.126090 | orchestrator | 2026-02-19 06:48:54.126100 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-19 06:48:54.126111 | orchestrator | Thursday 19 February 2026 06:48:21 +0000 (0:00:00.837) 1:05:07.827 ***** 2026-02-19 06:48:54.126122 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.126133 | orchestrator | 2026-02-19 06:48:54.126145 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-19 06:48:54.126158 | orchestrator | Thursday 19 February 2026 06:48:22 +0000 (0:00:00.777) 1:05:08.605 ***** 2026-02-19 06:48:54.126170 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.126182 | orchestrator | 2026-02-19 06:48:54.126194 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-19 06:48:54.126206 | orchestrator | Thursday 19 February 2026 06:48:23 +0000 (0:00:00.768) 1:05:09.373 ***** 2026-02-19 06:48:54.126218 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.126231 | orchestrator | 2026-02-19 06:48:54.126243 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-19 06:48:54.126256 | orchestrator | Thursday 19 February 2026 06:48:23 +0000 (0:00:00.764) 1:05:10.138 ***** 2026-02-19 06:48:54.126268 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:54.126280 | orchestrator | 2026-02-19 06:48:54.126292 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-19 06:48:54.126304 | orchestrator | Thursday 19 February 2026 06:48:24 +0000 (0:00:00.804) 1:05:10.942 ***** 2026-02-19 06:48:54.126343 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:54.126356 | orchestrator | 2026-02-19 06:48:54.126389 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-19 06:48:54.126402 | orchestrator | Thursday 19 February 2026 06:48:25 +0000 (0:00:00.795) 1:05:11.738 ***** 2026-02-19 06:48:54.126414 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.126426 | orchestrator | 2026-02-19 06:48:54.126439 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-19 06:48:54.126451 | orchestrator | Thursday 19 February 2026 06:48:26 +0000 (0:00:00.766) 1:05:12.504 ***** 2026-02-19 06:48:54.126463 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.126475 | orchestrator | 2026-02-19 06:48:54.126487 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-19 06:48:54.126499 | orchestrator | Thursday 19 February 2026 06:48:27 +0000 (0:00:00.801) 1:05:13.306 ***** 2026-02-19 06:48:54.126512 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.126529 | orchestrator | 2026-02-19 06:48:54.126550 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-19 06:48:54.126569 | orchestrator | Thursday 19 February 2026 06:48:27 +0000 (0:00:00.754) 1:05:14.060 ***** 2026-02-19 06:48:54.126595 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.126617 | orchestrator | 2026-02-19 06:48:54.126637 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-19 06:48:54.126656 | orchestrator | Thursday 19 February 2026 06:48:28 +0000 (0:00:00.791) 1:05:14.852 ***** 2026-02-19 06:48:54.126675 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.126693 | orchestrator | 2026-02-19 06:48:54.126710 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-19 06:48:54.126727 | orchestrator | Thursday 19 February 2026 06:48:29 +0000 (0:00:00.791) 1:05:15.643 ***** 
2026-02-19 06:48:54.126745 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.126765 | orchestrator | 2026-02-19 06:48:54.126786 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-19 06:48:54.126806 | orchestrator | Thursday 19 February 2026 06:48:30 +0000 (0:00:00.766) 1:05:16.410 ***** 2026-02-19 06:48:54.126825 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.126845 | orchestrator | 2026-02-19 06:48:54.126928 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-19 06:48:54.126947 | orchestrator | Thursday 19 February 2026 06:48:31 +0000 (0:00:00.833) 1:05:17.243 ***** 2026-02-19 06:48:54.126959 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.126970 | orchestrator | 2026-02-19 06:48:54.126980 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-19 06:48:54.126991 | orchestrator | Thursday 19 February 2026 06:48:31 +0000 (0:00:00.748) 1:05:17.991 ***** 2026-02-19 06:48:54.127002 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.127013 | orchestrator | 2026-02-19 06:48:54.127045 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-19 06:48:54.127056 | orchestrator | Thursday 19 February 2026 06:48:32 +0000 (0:00:00.772) 1:05:18.764 ***** 2026-02-19 06:48:54.127067 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.127078 | orchestrator | 2026-02-19 06:48:54.127089 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-19 06:48:54.127100 | orchestrator | Thursday 19 February 2026 06:48:33 +0000 (0:00:00.776) 1:05:19.541 ***** 2026-02-19 06:48:54.127110 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.127121 | orchestrator | 2026-02-19 06:48:54.127132 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-19 06:48:54.127142 | orchestrator | Thursday 19 February 2026 06:48:34 +0000 (0:00:00.760) 1:05:20.301 ***** 2026-02-19 06:48:54.127153 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.127164 | orchestrator | 2026-02-19 06:48:54.127174 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-19 06:48:54.127199 | orchestrator | Thursday 19 February 2026 06:48:34 +0000 (0:00:00.766) 1:05:21.068 ***** 2026-02-19 06:48:54.127210 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:54.127242 | orchestrator | 2026-02-19 06:48:54.127263 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-19 06:48:54.127274 | orchestrator | Thursday 19 February 2026 06:48:36 +0000 (0:00:01.609) 1:05:22.677 ***** 2026-02-19 06:48:54.127285 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:54.127296 | orchestrator | 2026-02-19 06:48:54.127306 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-19 06:48:54.127317 | orchestrator | Thursday 19 February 2026 06:48:38 +0000 (0:00:01.871) 1:05:24.548 ***** 2026-02-19 06:48:54.127328 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-19 06:48:54.127340 | orchestrator | 2026-02-19 06:48:54.127351 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-19 06:48:54.127362 | orchestrator | Thursday 19 February 2026 06:48:39 +0000 (0:00:01.105) 1:05:25.654 ***** 2026-02-19 06:48:54.127479 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.127493 | orchestrator | 2026-02-19 06:48:54.127504 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-19 06:48:54.127514 | orchestrator | Thursday 19 February 2026 06:48:40 +0000 (0:00:01.144) 1:05:26.798 ***** 
2026-02-19 06:48:54.127525 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.127536 | orchestrator | 2026-02-19 06:48:54.127547 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-19 06:48:54.127557 | orchestrator | Thursday 19 February 2026 06:48:41 +0000 (0:00:01.103) 1:05:27.901 ***** 2026-02-19 06:48:54.127568 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-19 06:48:54.127579 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-19 06:48:54.127590 | orchestrator | 2026-02-19 06:48:54.127601 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-19 06:48:54.127612 | orchestrator | Thursday 19 February 2026 06:48:43 +0000 (0:00:01.844) 1:05:29.746 ***** 2026-02-19 06:48:54.127622 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:54.127633 | orchestrator | 2026-02-19 06:48:54.127644 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-19 06:48:54.127655 | orchestrator | Thursday 19 February 2026 06:48:44 +0000 (0:00:01.477) 1:05:31.223 ***** 2026-02-19 06:48:54.127665 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.127676 | orchestrator | 2026-02-19 06:48:54.127687 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-19 06:48:54.127705 | orchestrator | Thursday 19 February 2026 06:48:46 +0000 (0:00:01.246) 1:05:32.470 ***** 2026-02-19 06:48:54.127717 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.127737 | orchestrator | 2026-02-19 06:48:54.127756 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-19 06:48:54.127774 | orchestrator | Thursday 19 February 2026 06:48:47 +0000 (0:00:00.821) 1:05:33.292 ***** 2026-02-19 06:48:54.127793 | orchestrator | 
skipping: [testbed-node-4] 2026-02-19 06:48:54.127812 | orchestrator | 2026-02-19 06:48:54.127830 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-19 06:48:54.127850 | orchestrator | Thursday 19 February 2026 06:48:47 +0000 (0:00:00.762) 1:05:34.055 ***** 2026-02-19 06:48:54.127871 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-02-19 06:48:54.127890 | orchestrator | 2026-02-19 06:48:54.127910 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-19 06:48:54.127930 | orchestrator | Thursday 19 February 2026 06:48:48 +0000 (0:00:01.108) 1:05:35.163 ***** 2026-02-19 06:48:54.127949 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:48:54.127968 | orchestrator | 2026-02-19 06:48:54.127980 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-19 06:48:54.127991 | orchestrator | Thursday 19 February 2026 06:48:50 +0000 (0:00:01.696) 1:05:36.860 ***** 2026-02-19 06:48:54.128013 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-19 06:48:54.128024 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-19 06:48:54.128034 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-19 06:48:54.128045 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.128056 | orchestrator | 2026-02-19 06:48:54.128067 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-19 06:48:54.128078 | orchestrator | Thursday 19 February 2026 06:48:51 +0000 (0:00:01.151) 1:05:38.011 ***** 2026-02-19 06:48:54.128088 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.128099 | orchestrator | 2026-02-19 06:48:54.128110 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-02-19 06:48:54.128121 | orchestrator | Thursday 19 February 2026 06:48:52 +0000 (0:00:01.132) 1:05:39.143 ***** 2026-02-19 06:48:54.128132 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:48:54.128143 | orchestrator | 2026-02-19 06:48:54.128166 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-19 06:49:36.506117 | orchestrator | Thursday 19 February 2026 06:48:54 +0000 (0:00:01.196) 1:05:40.340 ***** 2026-02-19 06:49:36.506236 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:49:36.506252 | orchestrator | 2026-02-19 06:49:36.506266 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-19 06:49:36.506278 | orchestrator | Thursday 19 February 2026 06:48:55 +0000 (0:00:01.136) 1:05:41.477 ***** 2026-02-19 06:49:36.506289 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:49:36.506300 | orchestrator | 2026-02-19 06:49:36.506311 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-19 06:49:36.506322 | orchestrator | Thursday 19 February 2026 06:48:56 +0000 (0:00:01.132) 1:05:42.609 ***** 2026-02-19 06:49:36.506333 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:49:36.506344 | orchestrator | 2026-02-19 06:49:36.506356 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-19 06:49:36.506447 | orchestrator | Thursday 19 February 2026 06:48:57 +0000 (0:00:00.765) 1:05:43.374 ***** 2026-02-19 06:49:36.506459 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:49:36.506471 | orchestrator | 2026-02-19 06:49:36.506482 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-19 06:49:36.506494 | orchestrator | Thursday 19 February 2026 06:48:59 +0000 (0:00:02.238) 1:05:45.613 ***** 2026-02-19 06:49:36.506506 | orchestrator | ok: 
[testbed-node-4] 2026-02-19 06:49:36.506517 | orchestrator | 2026-02-19 06:49:36.506528 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-19 06:49:36.506539 | orchestrator | Thursday 19 February 2026 06:49:00 +0000 (0:00:00.752) 1:05:46.365 ***** 2026-02-19 06:49:36.506550 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-02-19 06:49:36.506562 | orchestrator | 2026-02-19 06:49:36.506573 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-19 06:49:36.506584 | orchestrator | Thursday 19 February 2026 06:49:01 +0000 (0:00:01.100) 1:05:47.466 ***** 2026-02-19 06:49:36.506595 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:49:36.506606 | orchestrator | 2026-02-19 06:49:36.506617 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-19 06:49:36.506627 | orchestrator | Thursday 19 February 2026 06:49:02 +0000 (0:00:01.125) 1:05:48.592 ***** 2026-02-19 06:49:36.506639 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:49:36.506650 | orchestrator | 2026-02-19 06:49:36.506661 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-19 06:49:36.506672 | orchestrator | Thursday 19 February 2026 06:49:03 +0000 (0:00:01.103) 1:05:49.695 ***** 2026-02-19 06:49:36.506683 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:49:36.506694 | orchestrator | 2026-02-19 06:49:36.506705 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-19 06:49:36.506742 | orchestrator | Thursday 19 February 2026 06:49:04 +0000 (0:00:01.133) 1:05:50.828 ***** 2026-02-19 06:49:36.506753 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:49:36.506764 | orchestrator | 2026-02-19 06:49:36.506775 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ******************
2026-02-19 06:49:36.506786 | orchestrator | Thursday 19 February 2026 06:49:05 +0000 (0:00:01.125) 1:05:51.954 *****
2026-02-19 06:49:36.506797 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.506808 | orchestrator |
2026-02-19 06:49:36.506819 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-19 06:49:36.506830 | orchestrator | Thursday 19 February 2026 06:49:06 +0000 (0:00:01.152) 1:05:53.106 *****
2026-02-19 06:49:36.506840 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.506851 | orchestrator |
2026-02-19 06:49:36.506878 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-19 06:49:36.506890 | orchestrator | Thursday 19 February 2026 06:49:08 +0000 (0:00:01.120) 1:05:54.227 *****
2026-02-19 06:49:36.506901 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.506912 | orchestrator |
2026-02-19 06:49:36.506923 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-19 06:49:36.506933 | orchestrator | Thursday 19 February 2026 06:49:09 +0000 (0:00:01.118) 1:05:55.345 *****
2026-02-19 06:49:36.506944 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.506955 | orchestrator |
2026-02-19 06:49:36.506966 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-19 06:49:36.506977 | orchestrator | Thursday 19 February 2026 06:49:10 +0000 (0:00:01.119) 1:05:56.465 *****
2026-02-19 06:49:36.506988 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:49:36.506999 | orchestrator |
2026-02-19 06:49:36.507010 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-19 06:49:36.507021 | orchestrator | Thursday 19 February 2026 06:49:11 +0000 (0:00:00.887) 1:05:57.353 *****
2026-02-19 06:49:36.507032 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-02-19 06:49:36.507044 | orchestrator |
2026-02-19 06:49:36.507055 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-19 06:49:36.507066 | orchestrator | Thursday 19 February 2026 06:49:12 +0000 (0:00:01.108) 1:05:58.462 *****
2026-02-19 06:49:36.507077 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-02-19 06:49:36.507088 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-19 06:49:36.507099 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-19 06:49:36.507110 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-19 06:49:36.507121 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-19 06:49:36.507132 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-19 06:49:36.507142 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-19 06:49:36.507153 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-19 06:49:36.507164 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-19 06:49:36.507183 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-19 06:49:36.507203 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-19 06:49:36.507243 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-19 06:49:36.507263 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-19 06:49:36.507282 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 06:49:36.507302 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-02-19 06:49:36.507318 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-02-19 06:49:36.507329 | orchestrator |
2026-02-19 06:49:36.507340 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-19 06:49:36.507351 | orchestrator | Thursday 19 February 2026 06:49:18 +0000 (0:00:06.348) 1:06:04.811 *****
2026-02-19 06:49:36.507399 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-02-19 06:49:36.507411 | orchestrator |
2026-02-19 06:49:36.507422 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-19 06:49:36.507433 | orchestrator | Thursday 19 February 2026 06:49:19 +0000 (0:00:01.103) 1:06:05.915 *****
2026-02-19 06:49:36.507444 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 06:49:36.507456 | orchestrator |
2026-02-19 06:49:36.507467 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-19 06:49:36.507478 | orchestrator | Thursday 19 February 2026 06:49:21 +0000 (0:00:01.495) 1:06:07.410 *****
2026-02-19 06:49:36.507489 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 06:49:36.507500 | orchestrator |
2026-02-19 06:49:36.507510 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-19 06:49:36.507521 | orchestrator | Thursday 19 February 2026 06:49:22 +0000 (0:00:01.672) 1:06:09.083 *****
2026-02-19 06:49:36.507532 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.507543 | orchestrator |
2026-02-19 06:49:36.507554 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-19 06:49:36.507565 | orchestrator | Thursday 19 February 2026 06:49:23 +0000 (0:00:00.787) 1:06:09.870 *****
2026-02-19 06:49:36.507576 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.507586 | orchestrator |
2026-02-19 06:49:36.507597 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-19 06:49:36.507608 | orchestrator | Thursday 19 February 2026 06:49:24 +0000 (0:00:00.821) 1:06:10.691 *****
2026-02-19 06:49:36.507619 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.507630 | orchestrator |
2026-02-19 06:49:36.507641 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-19 06:49:36.507651 | orchestrator | Thursday 19 February 2026 06:49:25 +0000 (0:00:00.772) 1:06:11.464 *****
2026-02-19 06:49:36.507662 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.507673 | orchestrator |
2026-02-19 06:49:36.507684 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-19 06:49:36.507694 | orchestrator | Thursday 19 February 2026 06:49:26 +0000 (0:00:00.806) 1:06:12.270 *****
2026-02-19 06:49:36.507705 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.507716 | orchestrator |
2026-02-19 06:49:36.507727 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-19 06:49:36.507738 | orchestrator | Thursday 19 February 2026 06:49:26 +0000 (0:00:00.774) 1:06:13.045 *****
2026-02-19 06:49:36.507749 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.507760 | orchestrator |
2026-02-19 06:49:36.507776 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-19 06:49:36.507788 | orchestrator | Thursday 19 February 2026 06:49:27 +0000 (0:00:00.770) 1:06:13.816 *****
2026-02-19 06:49:36.507799 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.507810 | orchestrator |
2026-02-19 06:49:36.507820 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-19 06:49:36.507831 | orchestrator | Thursday 19 February 2026 06:49:28 +0000 (0:00:00.782) 1:06:14.599 *****
2026-02-19 06:49:36.507842 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.507853 | orchestrator |
2026-02-19 06:49:36.507864 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-19 06:49:36.507874 | orchestrator | Thursday 19 February 2026 06:49:29 +0000 (0:00:00.780) 1:06:15.379 *****
2026-02-19 06:49:36.507885 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.507896 | orchestrator |
2026-02-19 06:49:36.507907 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-19 06:49:36.507924 | orchestrator | Thursday 19 February 2026 06:49:29 +0000 (0:00:00.756) 1:06:16.136 *****
2026-02-19 06:49:36.507935 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.507946 | orchestrator |
2026-02-19 06:49:36.507957 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-19 06:49:36.507968 | orchestrator | Thursday 19 February 2026 06:49:30 +0000 (0:00:00.759) 1:06:16.896 *****
2026-02-19 06:49:36.507978 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:49:36.507989 | orchestrator |
2026-02-19 06:49:36.508000 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-19 06:49:36.508011 | orchestrator | Thursday 19 February 2026 06:49:31 +0000 (0:00:00.787) 1:06:17.683 *****
2026-02-19 06:49:36.508021 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-02-19 06:49:36.508032 | orchestrator |
2026-02-19 06:49:36.508043 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-19 06:49:36.508054 | orchestrator | Thursday 19 February 2026 06:49:35 +0000 (0:00:04.199) 1:06:21.883 *****
2026-02-19 06:49:36.508065 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 06:49:36.508076 | orchestrator |
2026-02-19 06:49:36.508094 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-19 06:50:18.269036 | orchestrator | Thursday 19 February 2026 06:49:36 +0000 (0:00:00.836) 1:06:22.720 *****
2026-02-19 06:50:18.269167 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-19 06:50:18.269180 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-19 06:50:18.269190 | orchestrator |
2026-02-19 06:50:18.269198 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-19 06:50:18.269205 | orchestrator | Thursday 19 February 2026 06:49:41 +0000 (0:00:04.674) 1:06:27.394 *****
2026-02-19 06:50:18.269212 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:50:18.269220 | orchestrator |
2026-02-19 06:50:18.269227 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-19 06:50:18.269233 | orchestrator | Thursday 19 February 2026 06:49:41 +0000 (0:00:00.768) 1:06:28.163 *****
2026-02-19 06:50:18.269239 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:50:18.269245 | orchestrator |
2026-02-19 06:50:18.269252 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:50:18.269260 | orchestrator | Thursday 19 February 2026 06:49:42 +0000 (0:00:00.776) 1:06:28.940 *****
2026-02-19 06:50:18.269267 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:50:18.269273 | orchestrator |
2026-02-19 06:50:18.269279 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:50:18.269285 | orchestrator | Thursday 19 February 2026 06:49:43 +0000 (0:00:00.784) 1:06:29.724 *****
2026-02-19 06:50:18.269291 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:50:18.269297 | orchestrator |
2026-02-19 06:50:18.269304 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:50:18.269310 | orchestrator | Thursday 19 February 2026 06:49:44 +0000 (0:00:00.770) 1:06:30.495 *****
2026-02-19 06:50:18.269316 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:50:18.269322 | orchestrator |
2026-02-19 06:50:18.269328 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:50:18.269334 | orchestrator | Thursday 19 February 2026 06:49:45 +0000 (0:00:00.795) 1:06:31.290 *****
2026-02-19 06:50:18.269422 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:50:18.269432 | orchestrator |
2026-02-19 06:50:18.269438 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:50:18.269445 | orchestrator | Thursday 19 February 2026 06:49:46 +0000 (0:00:01.352) 1:06:32.642 *****
2026-02-19 06:50:18.269451 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-19 06:50:18.269457 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-19 06:50:18.269463 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-19 06:50:18.269485 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:50:18.269492 | orchestrator |
2026-02-19 06:50:18.269498 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 06:50:18.269505 | orchestrator | Thursday 19 February 2026 06:49:47 +0000 (0:00:01.065) 1:06:33.708 *****
2026-02-19 06:50:18.269511 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-19 06:50:18.269517 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-19 06:50:18.269523 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-19 06:50:18.269529 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:50:18.269535 | orchestrator |
2026-02-19 06:50:18.269542 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 06:50:18.269549 | orchestrator | Thursday 19 February 2026 06:49:48 +0000 (0:00:01.064) 1:06:34.772 *****
2026-02-19 06:50:18.269555 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-19 06:50:18.269562 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-19 06:50:18.269569 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-19 06:50:18.269576 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:50:18.269582 | orchestrator |
2026-02-19 06:50:18.269590 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 06:50:18.269596 | orchestrator | Thursday 19 February 2026 06:49:49 +0000 (0:00:01.086) 1:06:35.858 *****
2026-02-19 06:50:18.269603 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:50:18.269610 | orchestrator |
2026-02-19 06:50:18.269617 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 06:50:18.269624 | orchestrator | Thursday 19 February 2026 06:49:50 +0000 (0:00:00.781) 1:06:36.640 *****
2026-02-19 06:50:18.269630 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-19 06:50:18.269637 | orchestrator |
2026-02-19 06:50:18.269644 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-19 06:50:18.269651 | orchestrator | Thursday 19 February 2026 06:49:51 +0000 (0:00:00.992) 1:06:37.632 *****
2026-02-19 06:50:18.269657 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:50:18.269664 | orchestrator |
2026-02-19 06:50:18.269671 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-19 06:50:18.269678 | orchestrator | Thursday 19 February 2026 06:49:52 +0000 (0:00:01.393) 1:06:39.025 *****
2026-02-19 06:50:18.269685 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4
2026-02-19 06:50:18.269692 | orchestrator |
2026-02-19 06:50:18.269715 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-19 06:50:18.269722 | orchestrator | Thursday 19 February 2026 06:49:53 +0000 (0:00:01.048) 1:06:40.073 *****
2026-02-19 06:50:18.269729 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 06:50:18.269736 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-19 06:50:18.269743 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-19 06:50:18.269750 | orchestrator |
2026-02-19 06:50:18.269757 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-19 06:50:18.269764 | orchestrator | Thursday 19 February 2026 06:49:57 +0000 (0:00:03.329) 1:06:43.403 *****
2026-02-19 06:50:18.269771 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-02-19 06:50:18.269784 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-19 06:50:18.269791 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:50:18.269798 | orchestrator |
2026-02-19 06:50:18.269805 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-19 06:50:18.269812 | orchestrator | Thursday 19 February 2026 06:49:59 +0000 (0:00:02.003) 1:06:45.407 *****
2026-02-19 06:50:18.269819 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:50:18.269826 | orchestrator |
2026-02-19 06:50:18.269833 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-19 06:50:18.269839 | orchestrator | Thursday 19 February 2026 06:50:00 +0000 (0:00:00.847) 1:06:46.254 *****
2026-02-19 06:50:18.269846 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4
2026-02-19 06:50:18.269854 | orchestrator |
2026-02-19 06:50:18.269860 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-19 06:50:18.269867 | orchestrator | Thursday 19 February 2026 06:50:01 +0000 (0:00:01.116) 1:06:47.371 *****
2026-02-19 06:50:18.269874 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 06:50:18.269883 | orchestrator |
2026-02-19 06:50:18.269890 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-19 06:50:18.269897 | orchestrator | Thursday 19 February 2026 06:50:02 +0000 (0:00:01.657) 1:06:49.029 *****
2026-02-19 06:50:18.269904 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 06:50:18.269911 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-19 06:50:18.269917 | orchestrator |
2026-02-19 06:50:18.269923 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-19 06:50:18.269929 | orchestrator | Thursday 19 February 2026 06:50:08 +0000 (0:00:05.595) 1:06:54.625 *****
2026-02-19 06:50:18.269935 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-19 06:50:18.269941 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-19 06:50:18.269948 | orchestrator |
2026-02-19 06:50:18.269954 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-19 06:50:18.269960 | orchestrator | Thursday 19 February 2026 06:50:11 +0000 (0:00:03.288) 1:06:57.914 *****
2026-02-19 06:50:18.269966 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-02-19 06:50:18.269972 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:50:18.269978 | orchestrator |
2026-02-19 06:50:18.269988 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-19 06:50:18.269994 | orchestrator | Thursday 19 February 2026 06:50:13 +0000 (0:00:01.677) 1:06:59.592 *****
2026-02-19 06:50:18.270000 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4
2026-02-19 06:50:18.270006 | orchestrator |
2026-02-19 06:50:18.270012 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-19 06:50:18.270075 | orchestrator | Thursday 19 February 2026 06:50:14 +0000 (0:00:01.115) 1:07:00.708 *****
2026-02-19 06:50:18.270081 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:50:18.270088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:50:18.270095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:50:18.270101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:50:18.270107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:50:18.270119 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:50:18.270126 | orchestrator |
2026-02-19 06:50:18.270132 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-19 06:50:18.270138 | orchestrator | Thursday 19 February 2026 06:50:16 +0000 (0:00:01.903) 1:07:02.611 *****
2026-02-19 06:50:18.270144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:50:18.270151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:50:18.270157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:50:18.270167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:51:27.363724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:51:27.363901 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:51:27.363929 | orchestrator |
2026-02-19 06:51:27.363949 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-19 06:51:27.363968 | orchestrator | Thursday 19 February 2026 06:50:18 +0000 (0:00:01.867) 1:07:04.479 *****
2026-02-19 06:51:27.363986 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:51:27.364005 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:51:27.364022 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:51:27.364039 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:51:27.364058 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-19 06:51:27.364074 | orchestrator |
2026-02-19 06:51:27.364090 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-19 06:51:27.364106 | orchestrator | Thursday 19 February 2026 06:50:52 +0000 (0:00:34.183) 1:07:38.662 *****
2026-02-19 06:51:27.364122 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:51:27.364137 | orchestrator |
2026-02-19 06:51:27.364153 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-19 06:51:27.364169 | orchestrator | Thursday 19 February 2026 06:50:53 +0000 (0:00:00.840) 1:07:39.503 *****
2026-02-19 06:51:27.364186 | orchestrator | skipping: [testbed-node-4]
2026-02-19 06:51:27.364202 | orchestrator |
2026-02-19 06:51:27.364220 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-19 06:51:27.364239 | orchestrator | Thursday 19 February 2026 06:50:54 +0000 (0:00:00.757) 1:07:40.261 *****
2026-02-19 06:51:27.364258 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4
2026-02-19 06:51:27.364277 | orchestrator |
2026-02-19 06:51:27.364296 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-19 06:51:27.364313 | orchestrator | Thursday 19 February 2026 06:50:55 +0000 (0:00:01.139) 1:07:41.400 *****
2026-02-19 06:51:27.364330 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4
2026-02-19 06:51:27.364422 | orchestrator |
2026-02-19 06:51:27.364444 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-19 06:51:27.364462 | orchestrator | Thursday 19 February 2026 06:50:56 +0000 (0:00:01.104) 1:07:42.505 *****
2026-02-19 06:51:27.364480 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:51:27.364537 | orchestrator |
2026-02-19 06:51:27.364576 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-19 06:51:27.364592 | orchestrator | Thursday 19 February 2026 06:50:58 +0000 (0:00:02.034) 1:07:44.540 *****
2026-02-19 06:51:27.364608 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:51:27.364627 | orchestrator |
2026-02-19 06:51:27.364642 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-19 06:51:27.364656 | orchestrator | Thursday 19 February 2026 06:51:00 +0000 (0:00:01.905) 1:07:46.446 *****
2026-02-19 06:51:27.364671 | orchestrator | ok: [testbed-node-4]
2026-02-19 06:51:27.364687 | orchestrator |
2026-02-19 06:51:27.364704 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-19 06:51:27.364721 | orchestrator | Thursday 19 February 2026 06:51:02 +0000 (0:00:02.263) 1:07:48.709 *****
2026-02-19 06:51:27.364738 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-19 06:51:27.364755 | orchestrator |
2026-02-19 06:51:27.364769 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-02-19 06:51:27.364784 | orchestrator |
2026-02-19 06:51:27.364801 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-19 06:51:27.364817 | orchestrator | Thursday 19 February 2026 06:51:05 +0000 (0:00:02.776) 1:07:51.486 *****
2026-02-19 06:51:27.364833 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-02-19 06:51:27.364849 | orchestrator |
2026-02-19 06:51:27.364867 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-19 06:51:27.364883 | orchestrator | Thursday 19 February 2026 06:51:06 +0000 (0:00:01.098) 1:07:52.584 *****
2026-02-19 06:51:27.364898 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:27.364915 | orchestrator |
2026-02-19 06:51:27.364930 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-19 06:51:27.364946 | orchestrator | Thursday 19 February 2026 06:51:07 +0000 (0:00:01.517) 1:07:54.102 *****
2026-02-19 06:51:27.364962 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:27.364978 | orchestrator |
2026-02-19 06:51:27.364994 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-19 06:51:27.365010 | orchestrator | Thursday 19 February 2026 06:51:09 +0000 (0:00:01.148) 1:07:55.251 *****
2026-02-19 06:51:27.365026 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:27.365042 | orchestrator |
2026-02-19 06:51:27.365059 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-19 06:51:27.365194 | orchestrator | Thursday 19 February 2026 06:51:10 +0000 (0:00:01.429) 1:07:56.680 *****
2026-02-19 06:51:27.365218 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:27.365235 | orchestrator |
2026-02-19 06:51:27.365282 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-19 06:51:27.365299 | orchestrator | Thursday 19 February 2026 06:51:11 +0000 (0:00:01.123) 1:07:57.803 *****
2026-02-19 06:51:27.365315 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:27.365330 | orchestrator |
2026-02-19 06:51:27.365373 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-19 06:51:27.365389 | orchestrator | Thursday 19 February 2026 06:51:12 +0000 (0:00:01.120) 1:07:58.924 *****
2026-02-19 06:51:27.365404 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:27.365419 | orchestrator |
2026-02-19 06:51:27.365436 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-19 06:51:27.365453 | orchestrator | Thursday 19 February 2026 06:51:13 +0000 (0:00:01.113) 1:08:00.038 *****
2026-02-19 06:51:27.365469 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:51:27.365485 | orchestrator |
2026-02-19 06:51:27.365500 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-19 06:51:27.365518 | orchestrator | Thursday 19 February 2026 06:51:14 +0000 (0:00:01.129) 1:08:01.168 *****
2026-02-19 06:51:27.365535 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:27.365552 | orchestrator |
2026-02-19 06:51:27.365589 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-19 06:51:27.365606 | orchestrator | Thursday 19 February 2026 06:51:16 +0000 (0:00:01.104) 1:08:02.272 *****
2026-02-19 06:51:27.365622 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:51:27.365637 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:51:27.365653 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:51:27.365670 | orchestrator |
2026-02-19 06:51:27.365685 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-19 06:51:27.365701 | orchestrator | Thursday 19 February 2026 06:51:17 +0000 (0:00:01.940) 1:08:04.213 *****
2026-02-19 06:51:27.365719 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:27.365735 | orchestrator |
2026-02-19 06:51:27.365752 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-19 06:51:27.365769 | orchestrator | Thursday 19 February 2026 06:51:19 +0000 (0:00:01.234) 1:08:05.447 *****
2026-02-19 06:51:27.365784 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:51:27.365801 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:51:27.365817 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:51:27.365834 | orchestrator |
2026-02-19 06:51:27.365851 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-19 06:51:27.365868 | orchestrator | Thursday 19 February 2026 06:51:22 +0000 (0:00:03.200) 1:08:08.647 *****
2026-02-19 06:51:27.365886 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-19 06:51:27.365902 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-19 06:51:27.365918 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-19 06:51:27.365934 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:51:27.365951 | orchestrator |
2026-02-19 06:51:27.365967 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-19 06:51:27.365997 | orchestrator | Thursday 19 February 2026 06:51:24 +0000 (0:00:01.811) 1:08:10.459 *****
2026-02-19 06:51:27.366092 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-19 06:51:27.366122 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 06:51:27.366140 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 06:51:27.366158 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:51:27.366176 | orchestrator |
2026-02-19 06:51:27.366195 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-19 06:51:27.366212 | orchestrator | Thursday 19 February 2026 06:51:26 +0000 (0:00:01.957) 1:08:12.416 *****
2026-02-19 06:51:27.366232 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 06:51:27.366275 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 06:51:45.549876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-19 06:51:45.550009 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:51:45.550101 | orchestrator |
2026-02-19 06:51:45.550117 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-19 06:51:45.550133 | orchestrator | Thursday 19 February 2026 06:51:27 +0000 (0:00:01.161) 1:08:13.577 *****
2026-02-19 06:51:45.550149 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'e3a5d710b112', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-19 06:51:19.769058', 'end': '2026-02-19 06:51:19.829553', 'delta': '0:00:00.060495', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3a5d710b112'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-19 06:51:45.550167 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a4335e23f9f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-19 06:51:20.692718', 'end': '2026-02-19 06:51:20.747429', 'delta': '0:00:00.054711', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a4335e23f9f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-19 06:51:45.550199 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '8bdbabe346bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-19 06:51:21.245812', 'end': '2026-02-19 06:51:21.292219', 'delta': '0:00:00.046407', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8bdbabe346bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-19 06:51:45.550214 | orchestrator |
2026-02-19 06:51:45.550229 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-19 06:51:45.550244 | orchestrator | Thursday 19 February 2026 06:51:28 +0000 (0:00:01.169) 1:08:14.747 *****
2026-02-19 06:51:45.550257 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:45.550272 | orchestrator |
2026-02-19 06:51:45.550286 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-19 06:51:45.550301 | orchestrator | Thursday 19 February 2026 06:51:29 +0000 (0:00:01.254) 1:08:16.001 *****
2026-02-19 06:51:45.550316 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:51:45.550331 | orchestrator |
2026-02-19 06:51:45.550372 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-19 06:51:45.550416 | orchestrator | Thursday 19 February 2026 06:51:31 +0000 (0:00:01.285) 1:08:17.287 *****
2026-02-19 06:51:45.550433 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:45.550449 | orchestrator |
2026-02-19 06:51:45.550465 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-19 06:51:45.550480 | orchestrator | Thursday 19 February 2026 06:51:32 +0000 (0:00:01.144) 1:08:18.431 *****
2026-02-19 06:51:45.550491 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-19 06:51:45.550501 | orchestrator |
2026-02-19 06:51:45.550509 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 06:51:45.550518 | orchestrator | Thursday 19 February 2026 06:51:34 +0000 (0:00:01.996) 1:08:20.428 *****
2026-02-19 06:51:45.550526 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:45.550535 | orchestrator |
2026-02-19 06:51:45.550543 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-19 06:51:45.550552 | orchestrator | Thursday 19 February 2026 06:51:35 +0000 (0:00:01.138) 1:08:21.567 *****
2026-02-19 06:51:45.550577 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:51:45.550586 | orchestrator |
2026-02-19 06:51:45.550595 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-19 06:51:45.550604 | orchestrator | Thursday 19 February 2026 06:51:36 +0000 (0:00:01.122) 1:08:22.690 *****
2026-02-19 06:51:45.550612 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:51:45.550621 | orchestrator |
2026-02-19 06:51:45.550629 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-19 06:51:45.550637 | orchestrator | Thursday 19 February 2026 06:51:37 +0000 (0:00:01.206) 1:08:23.896 *****
2026-02-19 06:51:45.550646 | orchestrator |
skipping: [testbed-node-5] 2026-02-19 06:51:45.550654 | orchestrator | 2026-02-19 06:51:45.550662 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-19 06:51:45.550671 | orchestrator | Thursday 19 February 2026 06:51:38 +0000 (0:00:01.089) 1:08:24.986 ***** 2026-02-19 06:51:45.550679 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:51:45.550688 | orchestrator | 2026-02-19 06:51:45.550696 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-19 06:51:45.550705 | orchestrator | Thursday 19 February 2026 06:51:39 +0000 (0:00:01.079) 1:08:26.065 ***** 2026-02-19 06:51:45.550713 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:51:45.550722 | orchestrator | 2026-02-19 06:51:45.550731 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-19 06:51:45.550739 | orchestrator | Thursday 19 February 2026 06:51:40 +0000 (0:00:01.123) 1:08:27.188 ***** 2026-02-19 06:51:45.550748 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:51:45.550756 | orchestrator | 2026-02-19 06:51:45.550764 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-19 06:51:45.550773 | orchestrator | Thursday 19 February 2026 06:51:42 +0000 (0:00:01.054) 1:08:28.243 ***** 2026-02-19 06:51:45.550782 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:51:45.550790 | orchestrator | 2026-02-19 06:51:45.550799 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-19 06:51:45.550807 | orchestrator | Thursday 19 February 2026 06:51:43 +0000 (0:00:01.123) 1:08:29.367 ***** 2026-02-19 06:51:45.550815 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:51:45.550824 | orchestrator | 2026-02-19 06:51:45.550832 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-19 06:51:45.550842 
| orchestrator | Thursday 19 February 2026 06:51:44 +0000 (0:00:01.073) 1:08:30.440 ***** 2026-02-19 06:51:45.550850 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:51:45.550859 | orchestrator | 2026-02-19 06:51:45.550867 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-19 06:51:45.550875 | orchestrator | Thursday 19 February 2026 06:51:45 +0000 (0:00:01.120) 1:08:31.561 ***** 2026-02-19 06:51:45.550885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:51:45.550909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456', 'dm-uuid-LVM-gHzkzoT6x1EhckfA8WsFQCGWNshTerqrXG1Ajk5mh4ejOwZYq1z2HQZKbcxUaUg2'], 'uuids': ['ca7295e3-b0e7-43de-a68b-3daf29557592'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2']}})  2026-02-19 06:51:45.550920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b', 'scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '74afed04', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-19 06:51:45.550938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6O260y-bve9-uiSU-QHAy-uS14-SBn4-tvFUE4', 'scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42', 'scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210']}})  2026-02-19 06:51:46.667815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:51:46.667924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:51:46.667941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-22-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-19 06:51:46.667979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:51:46.668004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM', 'dm-uuid-CRYPT-LUKS2-0386b2e9039d452a9d925bb7d9e8a516-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:51:46.668015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:51:46.668027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210', 'dm-uuid-LVM-UIbdS0VVHImCuypuIpNFpiSdvep5TRFy7pgtKei4H9zcQ1O9SOgtegap7Wmtw1fM'], 'uuids': ['0386b2e9-039d-452a-9d92-5bb7d9e8a516'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM']}})  2026-02-19 06:51:46.668057 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-82yKcB-Ey0W-COBu-ydNY-Ko6v-AgZ3-OegvdJ', 'scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46', 'scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456']}})  2026-02-19 06:51:46.668068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:51:46.668087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b283ac38', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-19 06:51:46.668125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:51:46.668148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-19 06:51:46.668176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2', 'dm-uuid-CRYPT-LUKS2-ca7295e3b0e743dea68b3daf29557592-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-19 06:51:46.884767 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:51:46.884853 | orchestrator | 2026-02-19 06:51:46.884864 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-19 06:51:46.884874 | orchestrator | Thursday 19 February 2026 06:51:46 +0000 (0:00:01.326) 1:08:32.888 ***** 2026-02-19 06:51:46.884884 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:46.884916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456', 'dm-uuid-LVM-gHzkzoT6x1EhckfA8WsFQCGWNshTerqrXG1Ajk5mh4ejOwZYq1z2HQZKbcxUaUg2'], 'uuids': ['ca7295e3-b0e7-43de-a68b-3daf29557592'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:46.884938 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b', 'scsi-SQEMU_QEMU_HARDDISK_74afed04-a71e-4a02-a193-e459fbff666b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '74afed04', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:46.884948 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6O260y-bve9-uiSU-QHAy-uS14-SBn4-tvFUE4', 'scsi-0QEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42', 'scsi-SQEMU_QEMU_HARDDISK_eb0041fe-9a39-4a97-a19c-5bfadd191a42'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:46.884971 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:46.884980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:46.884994 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-19-02-28-22-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:46.885006 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:46.885014 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM', 'dm-uuid-CRYPT-LUKS2-0386b2e9039d452a9d925bb7d9e8a516-7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:46.885022 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:46.885036 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98b2861f--503b--5d91--adc9--6468e68ac210-osd--block--98b2861f--503b--5d91--adc9--6468e68ac210', 'dm-uuid-LVM-UIbdS0VVHImCuypuIpNFpiSdvep5TRFy7pgtKei4H9zcQ1O9SOgtegap7Wmtw1fM'], 'uuids': ['0386b2e9-039d-452a-9d92-5bb7d9e8a516'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'eb0041fe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7pgtKe-i4H9-zcQ1-O9SO-gteg-ap7W-mtw1fM']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:59.295953 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-82yKcB-Ey0W-COBu-ydNY-Ko6v-AgZ3-OegvdJ', 'scsi-0QEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46', 'scsi-SQEMU_QEMU_HARDDISK_4779b863-88a8-4699-869f-263c4bc04c46'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4779b863', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--3bb39c06--9317--5e70--9108--eeec2efc4456-osd--block--3bb39c06--9317--5e70--9108--eeec2efc4456']}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:59.296098 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-19 06:51:59.296131 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b283ac38', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1', 'scsi-SQEMU_QEMU_HARDDISK_b283ac38-22f6-4db4-ae2a-791f04f43aaf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 06:51:59.296189 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 06:51:59.296212 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 06:51:59.296229 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2', 'dm-uuid-CRYPT-LUKS2-ca7295e3b0e743dea68b3daf29557592-XG1Ajk-5mh4-ejOw-ZYq1-z2HQ-ZKbc-xUaUg2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-19 06:51:59.296242 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:51:59.296254 | orchestrator |
2026-02-19 06:51:59.296266 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-19 06:51:59.296278 | orchestrator | Thursday 19 February 2026 06:51:48 +0000 (0:00:01.406) 1:08:34.294 *****
2026-02-19 06:51:59.296289 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:59.296300 | orchestrator |
2026-02-19 06:51:59.296311 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-19 06:51:59.296321 | orchestrator | Thursday 19 February 2026 06:51:49 +0000 (0:00:01.486) 1:08:35.780 *****
2026-02-19 06:51:59.296331 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:59.296375 | orchestrator |
2026-02-19 06:51:59.296386 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-19 06:51:59.296397 | orchestrator | Thursday 19 February 2026 06:51:50 +0000 (0:00:01.117) 1:08:36.898 *****
2026-02-19 06:51:59.296408 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:51:59.296418 | orchestrator |
2026-02-19 06:51:59.296429 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-19 06:51:59.296439 | orchestrator | Thursday 19 February 2026 06:51:52 +0000 (0:00:01.404) 1:08:38.303 *****
2026-02-19 06:51:59.296450 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:51:59.296460 | orchestrator |
2026-02-19 06:51:59.296470 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-19 06:51:59.296481 | orchestrator | Thursday 19 February 2026 06:51:53 +0000 (0:00:01.061) 1:08:39.365 *****
2026-02-19 06:51:59.296492 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:51:59.296502 | orchestrator |
2026-02-19 06:51:59.296513 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-19 06:51:59.296524 | orchestrator | Thursday 19 February 2026 06:51:54 +0000 (0:00:01.213) 1:08:40.578 *****
2026-02-19 06:51:59.296534 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:51:59.296545 | orchestrator |
2026-02-19 06:51:59.296556 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-19 06:51:59.296566 | orchestrator | Thursday 19 February 2026 06:51:55 +0000 (0:00:01.135) 1:08:41.713 *****
2026-02-19 06:51:59.296577 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-19 06:51:59.296588 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-19 06:51:59.296600 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-19 06:51:59.296619 | orchestrator |
2026-02-19 06:51:59.296630 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-19 06:51:59.296641 | orchestrator | Thursday 19 February 2026 06:51:57 +0000 (0:00:01.562) 1:08:43.276 *****
2026-02-19 06:51:59.296651 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-19 06:51:59.296663 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-19 06:51:59.296673 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-19 06:51:59.296684 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:51:59.296696 | orchestrator |
2026-02-19 06:51:59.296706 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-19 06:51:59.296717 | orchestrator | Thursday 19 February 2026 06:51:58 +0000 (0:00:01.111) 1:08:44.387 *****
2026-02-19 06:51:59.296728 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-02-19 06:51:59.296740 | orchestrator |
2026-02-19 06:51:59.296757 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-19 06:52:40.357163 | orchestrator | Thursday 19 February 2026 06:51:59 +0000 (0:00:01.124) 1:08:45.512 *****
2026-02-19 06:52:40.357318 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.357337 | orchestrator |
2026-02-19 06:52:40.357349 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-19 06:52:40.357360 | orchestrator | Thursday 19 February 2026 06:52:00 +0000 (0:00:01.127) 1:08:46.640 *****
2026-02-19 06:52:40.357370 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.357380 | orchestrator |
2026-02-19 06:52:40.357390 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-19 06:52:40.357400 | orchestrator | Thursday 19 February 2026 06:52:01 +0000 (0:00:01.121) 1:08:47.761 *****
2026-02-19 06:52:40.357410 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.357420 | orchestrator |
2026-02-19 06:52:40.357430 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-19 06:52:40.357440 | orchestrator | Thursday 19 February 2026 06:52:02 +0000 (0:00:01.120) 1:08:48.881 *****
2026-02-19 06:52:40.357449 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:52:40.357460 | orchestrator |
2026-02-19 06:52:40.357470 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-19 06:52:40.357480 | orchestrator | Thursday 19 February 2026 06:52:03 +0000 (0:00:01.202) 1:08:50.084 *****
2026-02-19 06:52:40.357490 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-19 06:52:40.357500 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-19 06:52:40.357510 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:52:40.357519 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.357529 | orchestrator |
2026-02-19 06:52:40.357539 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-19 06:52:40.357548 | orchestrator | Thursday 19 February 2026 06:52:05 +0000 (0:00:01.370) 1:08:51.454 *****
2026-02-19 06:52:40.357558 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-19 06:52:40.357568 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-19 06:52:40.357577 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:52:40.357587 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.357597 | orchestrator |
2026-02-19 06:52:40.357623 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-19 06:52:40.357633 | orchestrator | Thursday 19 February 2026 06:52:06 +0000 (0:00:01.379) 1:08:52.834 *****
2026-02-19 06:52:40.357643 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-19 06:52:40.357653 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-19 06:52:40.357667 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:52:40.357678 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.357708 | orchestrator |
2026-02-19 06:52:40.357720 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-19 06:52:40.357732 | orchestrator | Thursday 19 February 2026 06:52:07 +0000 (0:00:01.375) 1:08:54.209 *****
2026-02-19 06:52:40.357742 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:52:40.357753 | orchestrator |
2026-02-19 06:52:40.357764 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-19 06:52:40.357776 | orchestrator | Thursday 19 February 2026 06:52:09 +0000 (0:00:01.119) 1:08:55.329 *****
2026-02-19 06:52:40.357787 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-19 06:52:40.357800 | orchestrator |
2026-02-19 06:52:40.357815 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-19 06:52:40.357832 | orchestrator | Thursday 19 February 2026 06:52:10 +0000 (0:00:01.345) 1:08:56.675 *****
2026-02-19 06:52:40.357849 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:52:40.357871 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:52:40.357895 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:52:40.357911 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-19 06:52:40.357927 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 06:52:40.357942 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:52:40.357958 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 06:52:40.357974 | orchestrator |
2026-02-19 06:52:40.357990 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-19 06:52:40.358005 | orchestrator | Thursday 19 February 2026 06:52:12 +0000 (0:00:02.204) 1:08:58.879 *****
2026-02-19 06:52:40.358095 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-19 06:52:40.358113 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-19 06:52:40.358129 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-19 06:52:40.358146 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-19 06:52:40.358163 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-19 06:52:40.358178 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-19 06:52:40.358188 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-19 06:52:40.358198 | orchestrator |
2026-02-19 06:52:40.358207 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-19 06:52:40.358217 | orchestrator | Thursday 19 February 2026 06:52:14 +0000 (0:00:02.257) 1:09:01.137 *****
2026-02-19 06:52:40.358227 | orchestrator | changed: [testbed-node-5]
2026-02-19 06:52:40.358237 | orchestrator |
2026-02-19 06:52:40.358267 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-19 06:52:40.358315 | orchestrator | Thursday 19 February 2026 06:52:16 +0000 (0:00:01.953) 1:09:03.091 *****
2026-02-19 06:52:40.358326 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 06:52:40.358337 | orchestrator |
2026-02-19 06:52:40.358347 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-19 06:52:40.358356 | orchestrator | Thursday 19 February 2026 06:52:19 +0000 (0:00:02.546) 1:09:05.638 *****
2026-02-19 06:52:40.358366 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-19 06:52:40.358375 | orchestrator |
2026-02-19 06:52:40.358385 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-19 06:52:40.358395 | orchestrator | Thursday 19 February 2026 06:52:21 +0000 (0:00:01.950) 1:09:07.588 *****
2026-02-19 06:52:40.358416 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-19 06:52:40.358426 | orchestrator |
2026-02-19 06:52:40.358435 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-19 06:52:40.358445 | orchestrator | Thursday 19 February 2026 06:52:22 +0000 (0:00:01.116) 1:09:08.705 *****
2026-02-19 06:52:40.358454 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-19 06:52:40.358464 | orchestrator |
2026-02-19 06:52:40.358473 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-19 06:52:40.358483 | orchestrator | Thursday 19 February 2026 06:52:23 +0000 (0:00:01.121) 1:09:09.826 *****
2026-02-19 06:52:40.358492 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.358502 | orchestrator |
2026-02-19 06:52:40.358512 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-19 06:52:40.358521 | orchestrator | Thursday 19 February 2026 06:52:24 +0000 (0:00:01.105) 1:09:10.931 *****
2026-02-19 06:52:40.358531 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:52:40.358540 | orchestrator |
2026-02-19 06:52:40.358550 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-19 06:52:40.358567 | orchestrator | Thursday 19 February 2026 06:52:26 +0000 (0:00:01.538) 1:09:12.469 *****
2026-02-19 06:52:40.358578 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:52:40.358587 | orchestrator |
2026-02-19 06:52:40.358597 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-19 06:52:40.358607 | orchestrator | Thursday 19 February 2026 06:52:27 +0000 (0:00:01.503) 1:09:13.973 *****
2026-02-19 06:52:40.358616 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:52:40.358626 | orchestrator |
2026-02-19 06:52:40.358635 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-19 06:52:40.358645 | orchestrator | Thursday 19 February 2026 06:52:29 +0000 (0:00:01.589) 1:09:15.563 *****
2026-02-19 06:52:40.358654 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.358664 | orchestrator |
2026-02-19 06:52:40.358674 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-19 06:52:40.358683 | orchestrator | Thursday 19 February 2026 06:52:30 +0000 (0:00:01.097) 1:09:16.660 *****
2026-02-19 06:52:40.358693 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.358702 | orchestrator |
2026-02-19 06:52:40.358712 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-19 06:52:40.358721 | orchestrator | Thursday 19 February 2026 06:52:31 +0000 (0:00:01.108) 1:09:17.769 *****
2026-02-19 06:52:40.358731 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.358740 | orchestrator |
2026-02-19 06:52:40.358750 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-19 06:52:40.358760 | orchestrator | Thursday 19 February 2026 06:52:32 +0000 (0:00:01.096) 1:09:18.865 *****
2026-02-19 06:52:40.358769 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:52:40.358779 | orchestrator |
2026-02-19 06:52:40.358788 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-19 06:52:40.358797 | orchestrator | Thursday 19 February 2026 06:52:34 +0000 (0:00:01.544) 1:09:20.410 *****
2026-02-19 06:52:40.358807 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:52:40.358817 | orchestrator |
2026-02-19 06:52:40.358827 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-19 06:52:40.358836 | orchestrator | Thursday 19 February 2026 06:52:35 +0000 (0:00:01.516) 1:09:21.926 *****
2026-02-19 06:52:40.358845 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.358855 | orchestrator |
2026-02-19 06:52:40.358865 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-19 06:52:40.358874 | orchestrator | Thursday 19 February 2026 06:52:36 +0000 (0:00:00.783) 1:09:22.709 *****
2026-02-19 06:52:40.358883 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.358893 | orchestrator |
2026-02-19 06:52:40.358903 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-19 06:52:40.358917 | orchestrator | Thursday 19 February 2026 06:52:37 +0000 (0:00:00.759) 1:09:23.469 *****
2026-02-19 06:52:40.358927 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:52:40.358937 | orchestrator |
2026-02-19 06:52:40.358946 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-19 06:52:40.358956 | orchestrator | Thursday 19 February 2026 06:52:38 +0000 (0:00:00.796) 1:09:24.265 *****
2026-02-19 06:52:40.358965 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:52:40.358975 | orchestrator |
2026-02-19 06:52:40.358985 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-19 06:52:40.358994 | orchestrator | Thursday 19 February 2026 06:52:38 +0000 (0:00:00.781) 1:09:25.047 *****
2026-02-19 06:52:40.359003 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:52:40.359013 | orchestrator |
2026-02-19 06:52:40.359023 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-19 06:52:40.359038 | orchestrator | Thursday 19 February 2026 06:52:39 +0000 (0:00:00.766) 1:09:25.813 *****
2026-02-19 06:52:40.359053 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:52:40.359066 | orchestrator |
2026-02-19 06:52:40.359086 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-19 06:53:20.596598 | orchestrator | Thursday 19 February 2026 06:52:40 +0000 (0:00:00.760) 1:09:26.573 *****
2026-02-19 06:53:20.596687 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.596696 | orchestrator |
2026-02-19 06:53:20.596703 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-19 06:53:20.596708 | orchestrator | Thursday 19 February 2026 06:52:41 +0000 (0:00:00.786) 1:09:27.360 *****
2026-02-19 06:53:20.596715 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.596720 | orchestrator |
2026-02-19 06:53:20.596726 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-19 06:53:20.596731 | orchestrator | Thursday 19 February 2026 06:52:41 +0000 (0:00:00.767) 1:09:28.127 *****
2026-02-19 06:53:20.596736 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:53:20.596741 | orchestrator |
2026-02-19 06:53:20.596746 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-19 06:53:20.596751 | orchestrator | Thursday 19 February 2026 06:52:42 +0000 (0:00:00.770) 1:09:28.898 *****
2026-02-19 06:53:20.596756 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:53:20.596761 | orchestrator |
2026-02-19 06:53:20.596766 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-19 06:53:20.596771 | orchestrator | Thursday 19 February 2026 06:52:43 +0000 (0:00:00.868) 1:09:29.766 *****
2026-02-19 06:53:20.596775 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.596780 | orchestrator |
2026-02-19 06:53:20.596785 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-19 06:53:20.596790 | orchestrator | Thursday 19 February 2026 06:52:44 +0000 (0:00:00.758) 1:09:30.525 *****
2026-02-19 06:53:20.596796 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.596804 | orchestrator |
2026-02-19 06:53:20.596814 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-19 06:53:20.596826 | orchestrator | Thursday 19 February 2026 06:52:45 +0000 (0:00:00.746) 1:09:31.272 *****
2026-02-19 06:53:20.596833 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.596841 | orchestrator |
2026-02-19 06:53:20.596849 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-19 06:53:20.596856 | orchestrator | Thursday 19 February 2026 06:52:45 +0000 (0:00:00.799) 1:09:32.071 *****
2026-02-19 06:53:20.596864 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.596871 | orchestrator |
2026-02-19 06:53:20.596894 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-19 06:53:20.596902 | orchestrator | Thursday 19 February 2026 06:52:46 +0000 (0:00:00.780) 1:09:32.852 *****
2026-02-19 06:53:20.596909 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.596917 | orchestrator |
2026-02-19 06:53:20.596925 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-19 06:53:20.596953 | orchestrator | Thursday 19 February 2026 06:52:47 +0000 (0:00:00.759) 1:09:33.612 *****
2026-02-19 06:53:20.596962 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.596970 | orchestrator |
2026-02-19 06:53:20.596978 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-19 06:53:20.596985 | orchestrator | Thursday 19 February 2026 06:52:48 +0000 (0:00:00.761) 1:09:34.373 *****
2026-02-19 06:53:20.596992 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597001 | orchestrator |
2026-02-19 06:53:20.597009 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-19 06:53:20.597019 | orchestrator | Thursday 19 February 2026 06:52:48 +0000 (0:00:00.760) 1:09:35.133 *****
2026-02-19 06:53:20.597024 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597029 | orchestrator |
2026-02-19 06:53:20.597034 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-19 06:53:20.597038 | orchestrator | Thursday 19 February 2026 06:52:49 +0000 (0:00:00.784) 1:09:35.918 *****
2026-02-19 06:53:20.597043 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597048 | orchestrator |
2026-02-19 06:53:20.597053 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-19 06:53:20.597058 | orchestrator | Thursday 19 February 2026 06:52:50 +0000 (0:00:00.782) 1:09:36.700 *****
2026-02-19 06:53:20.597063 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597067 | orchestrator |
2026-02-19 06:53:20.597072 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-19 06:53:20.597077 | orchestrator | Thursday 19 February 2026 06:52:51 +0000 (0:00:00.769) 1:09:37.470 *****
2026-02-19 06:53:20.597082 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597087 | orchestrator |
2026-02-19 06:53:20.597091 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-19 06:53:20.597096 | orchestrator | Thursday 19 February 2026 06:52:52 +0000 (0:00:00.776) 1:09:38.247 *****
2026-02-19 06:53:20.597101 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597106 | orchestrator |
2026-02-19 06:53:20.597110 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-19 06:53:20.597115 | orchestrator | Thursday 19 February 2026 06:52:52 +0000 (0:00:00.855) 1:09:39.103 *****
2026-02-19 06:53:20.597120 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:53:20.597125 | orchestrator |
2026-02-19 06:53:20.597129 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-19 06:53:20.597134 | orchestrator | Thursday 19 February 2026 06:52:54 +0000 (0:00:01.586) 1:09:40.689 *****
2026-02-19 06:53:20.597139 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:53:20.597144 | orchestrator |
2026-02-19 06:53:20.597150 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-19 06:53:20.597155 | orchestrator | Thursday 19 February 2026 06:52:56 +0000 (0:00:01.981) 1:09:42.671 *****
2026-02-19 06:53:20.597161 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-19 06:53:20.597167 | orchestrator |
2026-02-19 06:53:20.597173 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-19 06:53:20.597178 | orchestrator | Thursday 19 February 2026 06:52:57 +0000 (0:00:01.102) 1:09:43.773 *****
2026-02-19 06:53:20.597183 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597189 | orchestrator |
2026-02-19 06:53:20.597194 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-19 06:53:20.597239 | orchestrator | Thursday 19 February 2026 06:52:58 +0000 (0:00:01.148) 1:09:44.922 *****
2026-02-19 06:53:20.597246 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597251 | orchestrator |
2026-02-19 06:53:20.597257 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-19 06:53:20.597262 | orchestrator | Thursday 19 February 2026 06:52:59 +0000 (0:00:01.148) 1:09:46.070 *****
2026-02-19 06:53:20.597267 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-19 06:53:20.597279 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-19 06:53:20.597285 | orchestrator |
2026-02-19 06:53:20.597290 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-19 06:53:20.597296 | orchestrator | Thursday 19 February 2026 06:53:01 +0000 (0:00:01.866) 1:09:47.936 *****
2026-02-19 06:53:20.597301 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:53:20.597306 | orchestrator |
2026-02-19 06:53:20.597311 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-19 06:53:20.597317 | orchestrator | Thursday 19 February 2026 06:53:03 +0000 (0:00:01.427) 1:09:49.364 *****
2026-02-19 06:53:20.597323 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597328 | orchestrator |
2026-02-19 06:53:20.597333 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-19 06:53:20.597338 | orchestrator | Thursday 19 February 2026 06:53:04 +0000 (0:00:01.112) 1:09:50.476 *****
2026-02-19 06:53:20.597344 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597349 | orchestrator |
2026-02-19 06:53:20.597354 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-19 06:53:20.597360 | orchestrator | Thursday 19 February 2026 06:53:05 +0000 (0:00:00.835) 1:09:51.312 *****
2026-02-19 06:53:20.597365 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597371 | orchestrator |
2026-02-19 06:53:20.597376 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-19 06:53:20.597381 | orchestrator | Thursday 19 February 2026 06:53:05 +0000 (0:00:00.833) 1:09:52.145 *****
2026-02-19 06:53:20.597387 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-19 06:53:20.597392 | orchestrator |
2026-02-19 06:53:20.597402 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-19 06:53:20.597407 | orchestrator | Thursday 19 February 2026 06:53:07 +0000 (0:00:01.247) 1:09:53.393 *****
2026-02-19 06:53:20.597412 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:53:20.597418 | orchestrator |
2026-02-19 06:53:20.597423 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-19 06:53:20.597429 | orchestrator | Thursday 19 February 2026 06:53:08 +0000 (0:00:01.732) 1:09:55.125 *****
2026-02-19 06:53:20.597434 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-19 06:53:20.597439 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-19 06:53:20.597445 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-19 06:53:20.597450 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597455 | orchestrator |
2026-02-19 06:53:20.597461 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-19 06:53:20.597466 | orchestrator | Thursday 19 February 2026 06:53:10 +0000 (0:00:01.132) 1:09:56.258 *****
2026-02-19 06:53:20.597472 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597478 | orchestrator |
2026-02-19 06:53:20.597483 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-19 06:53:20.597488 | orchestrator | Thursday 19 February 2026 06:53:11 +0000 (0:00:01.190) 1:09:57.448 *****
2026-02-19 06:53:20.597493 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597499 | orchestrator |
2026-02-19 06:53:20.597504 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-19 06:53:20.597510 | orchestrator | Thursday 19 February 2026 06:53:12 +0000 (0:00:01.144) 1:09:58.593 *****
2026-02-19 06:53:20.597518 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597525 | orchestrator |
2026-02-19 06:53:20.597531 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-19 06:53:20.597535 | orchestrator | Thursday 19 February 2026 06:53:13 +0000 (0:00:01.128) 1:09:59.721 *****
2026-02-19 06:53:20.597540 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597545 | orchestrator |
2026-02-19 06:53:20.597553 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-19 06:53:20.597558 | orchestrator | Thursday 19 February 2026 06:53:14 +0000 (0:00:01.126) 1:10:00.848 *****
2026-02-19 06:53:20.597563 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597568 | orchestrator |
2026-02-19 06:53:20.597572 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-19 06:53:20.597577 | orchestrator | Thursday 19 February 2026 06:53:15 +0000 (0:00:00.782) 1:10:01.630 *****
2026-02-19 06:53:20.597582 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:53:20.597586 | orchestrator |
2026-02-19 06:53:20.597591 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-19 06:53:20.597596 | orchestrator | Thursday 19 February 2026 06:53:17 +0000 (0:00:02.157) 1:10:03.788 *****
2026-02-19 06:53:20.597601 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:53:20.597606 | orchestrator |
2026-02-19 06:53:20.597610 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-19 06:53:20.597615 | orchestrator | Thursday 19 February 2026 06:53:18 +0000 (0:00:00.786) 1:10:04.575 *****
2026-02-19 06:53:20.597620 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-19 06:53:20.597624 | orchestrator |
2026-02-19 06:53:20.597629 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-19 06:53:20.597634 | orchestrator | Thursday 19 February 2026 06:53:19 +0000 (0:00:01.096) 1:10:05.672 *****
2026-02-19 06:53:20.597641 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:53:20.597649 | orchestrator |
2026-02-19 06:53:20.597657 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-19 06:53:20.597670 | orchestrator | Thursday 19 February 2026 06:53:20 +0000 (0:00:01.140) 1:10:06.812 *****
2026-02-19 06:54:01.759746 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:54:01.759895 | orchestrator |
2026-02-19 06:54:01.759919 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-19 06:54:01.759932 | orchestrator | Thursday 19 February 2026 06:53:21 +0000 (0:00:01.134) 1:10:07.946 *****
2026-02-19 06:54:01.759944 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:54:01.759955 | orchestrator |
2026-02-19 06:54:01.759967 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-19 06:54:01.759978 | orchestrator | Thursday 19 February 2026 06:53:22 +0000 (0:00:01.169) 1:10:09.117 *****
2026-02-19 06:54:01.759989 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:54:01.760008 | orchestrator |
2026-02-19 06:54:01.760026 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-19 06:54:01.760046 | orchestrator | Thursday 19 February 2026 06:53:24 +0000 (0:00:01.149) 1:10:10.266 *****
2026-02-19 06:54:01.760065 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:54:01.760081 | orchestrator |
2026-02-19 06:54:01.760092 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-19 06:54:01.760103 | orchestrator | Thursday 19 February 2026 06:53:25 +0000 (0:00:01.125) 1:10:11.391 *****
2026-02-19 06:54:01.760115 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:54:01.760127 | orchestrator |
2026-02-19 06:54:01.760138 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-19 06:54:01.760149 | orchestrator | Thursday 19 February 2026 06:53:26 +0000 (0:00:01.144) 1:10:12.535 *****
2026-02-19 06:54:01.760192 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:54:01.760213 | orchestrator |
2026-02-19 06:54:01.760233 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-19 06:54:01.760252 | orchestrator | Thursday 19 February 2026 06:53:27 +0000 (0:00:01.111) 1:10:13.647 *****
2026-02-19 06:54:01.760264 | orchestrator | skipping: [testbed-node-5]
2026-02-19 06:54:01.760275 | orchestrator |
2026-02-19 06:54:01.760289 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-19 06:54:01.760309 | orchestrator | Thursday 19 February 2026 06:53:28 +0000 (0:00:01.146) 1:10:14.794 *****
2026-02-19 06:54:01.760366 | orchestrator | ok: [testbed-node-5]
2026-02-19 06:54:01.760388 | orchestrator |
2026-02-19 06:54:01.760415 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-19 06:54:01.760426 | orchestrator | Thursday 19 February 2026 06:53:29 +0000 (0:00:00.794) 1:10:15.589 *****
2026-02-19 06:54:01.760438 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-19 06:54:01.760450 | orchestrator |
2026-02-19 06:54:01.760461 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-19 06:54:01.760472 | orchestrator | Thursday 19 February 2026 06:53:30 +0000 (0:00:01.148) 1:10:16.737 *****
2026-02-19 06:54:01.760483 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-19 06:54:01.760495 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-19 06:54:01.760506 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-19 06:54:01.760517 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-19 06:54:01.760528 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-19 06:54:01.760538 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-19 06:54:01.760549 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-19 06:54:01.760560 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-19 06:54:01.760571 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-19 06:54:01.760582 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-19 06:54:01.760593 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-19 06:54:01.760604 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-19 06:54:01.760615 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-19 06:54:01.760626 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-19 06:54:01.760637 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-19 06:54:01.760648 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-19 06:54:01.760659 | orchestrator |
2026-02-19 06:54:01.760670 | orchestrator |
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-19 06:54:01.760681 | orchestrator | Thursday 19 February 2026 06:53:36 +0000 (0:00:06.327) 1:10:23.064 ***** 2026-02-19 06:54:01.760692 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-02-19 06:54:01.760703 | orchestrator | 2026-02-19 06:54:01.760714 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-19 06:54:01.760725 | orchestrator | Thursday 19 February 2026 06:53:37 +0000 (0:00:01.117) 1:10:24.181 ***** 2026-02-19 06:54:01.760737 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-19 06:54:01.760749 | orchestrator | 2026-02-19 06:54:01.760760 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-19 06:54:01.760771 | orchestrator | Thursday 19 February 2026 06:53:39 +0000 (0:00:01.803) 1:10:25.985 ***** 2026-02-19 06:54:01.760782 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-19 06:54:01.760793 | orchestrator | 2026-02-19 06:54:01.760805 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-19 06:54:01.760815 | orchestrator | Thursday 19 February 2026 06:53:41 +0000 (0:00:01.693) 1:10:27.679 ***** 2026-02-19 06:54:01.760826 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.760838 | orchestrator | 2026-02-19 06:54:01.760849 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-19 06:54:01.760880 | orchestrator | Thursday 19 February 2026 06:53:42 +0000 (0:00:00.796) 1:10:28.475 ***** 2026-02-19 06:54:01.760892 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.760903 | 
orchestrator | 2026-02-19 06:54:01.760914 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-19 06:54:01.760935 | orchestrator | Thursday 19 February 2026 06:53:43 +0000 (0:00:00.765) 1:10:29.240 ***** 2026-02-19 06:54:01.760946 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.760957 | orchestrator | 2026-02-19 06:54:01.760968 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-19 06:54:01.760979 | orchestrator | Thursday 19 February 2026 06:53:43 +0000 (0:00:00.778) 1:10:30.019 ***** 2026-02-19 06:54:01.760990 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.761001 | orchestrator | 2026-02-19 06:54:01.761012 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-19 06:54:01.761023 | orchestrator | Thursday 19 February 2026 06:53:44 +0000 (0:00:00.765) 1:10:30.784 ***** 2026-02-19 06:54:01.761033 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.761044 | orchestrator | 2026-02-19 06:54:01.761055 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-19 06:54:01.761066 | orchestrator | Thursday 19 February 2026 06:53:45 +0000 (0:00:00.755) 1:10:31.540 ***** 2026-02-19 06:54:01.761077 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.761088 | orchestrator | 2026-02-19 06:54:01.761099 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-19 06:54:01.761110 | orchestrator | Thursday 19 February 2026 06:53:46 +0000 (0:00:00.755) 1:10:32.295 ***** 2026-02-19 06:54:01.761121 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.761132 | orchestrator | 2026-02-19 06:54:01.761143 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-19 06:54:01.761178 | orchestrator | Thursday 19 February 2026 06:53:46 +0000 (0:00:00.639) 1:10:32.935 ***** 2026-02-19 06:54:01.761190 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.761201 | orchestrator | 2026-02-19 06:54:01.761212 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-19 06:54:01.761229 | orchestrator | Thursday 19 February 2026 06:53:47 +0000 (0:00:00.628) 1:10:33.564 ***** 2026-02-19 06:54:01.761240 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.761251 | orchestrator | 2026-02-19 06:54:01.761262 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-19 06:54:01.761273 | orchestrator | Thursday 19 February 2026 06:53:48 +0000 (0:00:00.759) 1:10:34.323 ***** 2026-02-19 06:54:01.761284 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.761295 | orchestrator | 2026-02-19 06:54:01.761306 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-19 06:54:01.761317 | orchestrator | Thursday 19 February 2026 06:53:48 +0000 (0:00:00.733) 1:10:35.057 ***** 2026-02-19 06:54:01.761328 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.761339 | orchestrator | 2026-02-19 06:54:01.761350 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-19 06:54:01.761361 | orchestrator | Thursday 19 February 2026 06:53:49 +0000 (0:00:00.766) 1:10:35.823 ***** 2026-02-19 06:54:01.761372 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-02-19 06:54:01.761383 | orchestrator | 2026-02-19 06:54:01.761394 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-19 06:54:01.761405 | orchestrator | Thursday 19 February 2026 06:53:53 +0000 (0:00:04.379) 1:10:40.203 ***** 2026-02-19 06:54:01.761417 | orchestrator | 
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-19 06:54:01.761438 | orchestrator | 2026-02-19 06:54:01.761459 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-19 06:54:01.761479 | orchestrator | Thursday 19 February 2026 06:53:54 +0000 (0:00:00.825) 1:10:41.028 ***** 2026-02-19 06:54:01.761502 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-19 06:54:01.761541 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-19 06:54:01.761564 | orchestrator | 2026-02-19 06:54:01.761585 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-19 06:54:01.761603 | orchestrator | Thursday 19 February 2026 06:53:59 +0000 (0:00:04.554) 1:10:45.583 ***** 2026-02-19 06:54:01.761614 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.761625 | orchestrator | 2026-02-19 06:54:01.761636 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-19 06:54:01.761647 | orchestrator | Thursday 19 February 2026 06:54:00 +0000 (0:00:00.798) 1:10:46.381 ***** 2026-02-19 06:54:01.761658 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.761673 | orchestrator | 2026-02-19 06:54:01.761692 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-19 06:54:01.761712 | orchestrator | Thursday 19 February 2026 06:54:00 +0000 (0:00:00.795) 1:10:47.177 ***** 2026-02-19 06:54:01.761731 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:54:01.761742 | orchestrator | 2026-02-19 06:54:01.761754 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-19 06:54:01.761785 | orchestrator | Thursday 19 February 2026 06:54:01 +0000 (0:00:00.798) 1:10:47.976 ***** 2026-02-19 06:55:11.548039 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:55:11.548198 | orchestrator | 2026-02-19 06:55:11.548211 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-19 06:55:11.548219 | orchestrator | Thursday 19 February 2026 06:54:02 +0000 (0:00:00.799) 1:10:48.775 ***** 2026-02-19 06:55:11.548226 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:55:11.548232 | orchestrator | 2026-02-19 06:55:11.548239 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-19 06:55:11.548246 | orchestrator | Thursday 19 February 2026 06:54:03 +0000 (0:00:00.784) 1:10:49.559 ***** 2026-02-19 06:55:11.548252 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:55:11.548259 | orchestrator | 2026-02-19 06:55:11.548265 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-19 06:55:11.548272 | orchestrator | Thursday 19 February 2026 06:54:04 +0000 (0:00:00.876) 1:10:50.436 ***** 2026-02-19 06:55:11.548278 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-19 06:55:11.548284 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-19 06:55:11.548290 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-19 06:55:11.548296 | orchestrator | skipping: 
[testbed-node-5] 2026-02-19 06:55:11.548303 | orchestrator | 2026-02-19 06:55:11.548309 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-19 06:55:11.548316 | orchestrator | Thursday 19 February 2026 06:54:05 +0000 (0:00:01.020) 1:10:51.457 ***** 2026-02-19 06:55:11.548322 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-19 06:55:11.548328 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-19 06:55:11.548334 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-19 06:55:11.548340 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:55:11.548346 | orchestrator | 2026-02-19 06:55:11.548352 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-19 06:55:11.548358 | orchestrator | Thursday 19 February 2026 06:54:06 +0000 (0:00:01.089) 1:10:52.546 ***** 2026-02-19 06:55:11.548376 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-19 06:55:11.548383 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-19 06:55:11.548406 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-19 06:55:11.548413 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:55:11.548419 | orchestrator | 2026-02-19 06:55:11.548425 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-19 06:55:11.548431 | orchestrator | Thursday 19 February 2026 06:54:07 +0000 (0:00:01.377) 1:10:53.924 ***** 2026-02-19 06:55:11.548438 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:55:11.548444 | orchestrator | 2026-02-19 06:55:11.548450 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-19 06:55:11.548456 | orchestrator | Thursday 19 February 2026 06:54:08 +0000 (0:00:00.790) 1:10:54.714 ***** 2026-02-19 06:55:11.548462 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-02-19 06:55:11.548468 | orchestrator | 2026-02-19 06:55:11.548475 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-19 06:55:11.548481 | orchestrator | Thursday 19 February 2026 06:54:09 +0000 (0:00:01.422) 1:10:56.136 ***** 2026-02-19 06:55:11.548487 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:55:11.548493 | orchestrator | 2026-02-19 06:55:11.548499 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-19 06:55:11.548505 | orchestrator | Thursday 19 February 2026 06:54:11 +0000 (0:00:01.426) 1:10:57.562 ***** 2026-02-19 06:55:11.548511 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-02-19 06:55:11.548517 | orchestrator | 2026-02-19 06:55:11.548523 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-19 06:55:11.548540 | orchestrator | Thursday 19 February 2026 06:54:12 +0000 (0:00:01.096) 1:10:58.659 ***** 2026-02-19 06:55:11.548546 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 06:55:11.548552 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-19 06:55:11.548559 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-19 06:55:11.548565 | orchestrator | 2026-02-19 06:55:11.548571 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-19 06:55:11.548577 | orchestrator | Thursday 19 February 2026 06:54:15 +0000 (0:00:03.340) 1:11:02.000 ***** 2026-02-19 06:55:11.548583 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-19 06:55:11.548589 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-19 06:55:11.548595 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:55:11.548602 | orchestrator | 2026-02-19 06:55:11.548608 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-19 06:55:11.548614 | orchestrator | Thursday 19 February 2026 06:54:17 +0000 (0:00:01.968) 1:11:03.969 ***** 2026-02-19 06:55:11.548620 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:55:11.548626 | orchestrator | 2026-02-19 06:55:11.548632 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-19 06:55:11.548638 | orchestrator | Thursday 19 February 2026 06:54:18 +0000 (0:00:00.749) 1:11:04.719 ***** 2026-02-19 06:55:11.548645 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-02-19 06:55:11.548651 | orchestrator | 2026-02-19 06:55:11.548658 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-19 06:55:11.548664 | orchestrator | Thursday 19 February 2026 06:54:19 +0000 (0:00:01.109) 1:11:05.828 ***** 2026-02-19 06:55:11.548671 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-19 06:55:11.548679 | orchestrator | 2026-02-19 06:55:11.548685 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-19 06:55:11.548691 | orchestrator | Thursday 19 February 2026 06:54:21 +0000 (0:00:01.676) 1:11:07.504 ***** 2026-02-19 06:55:11.548711 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 06:55:11.548718 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-19 06:55:11.548731 | orchestrator | 2026-02-19 06:55:11.548737 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-19 06:55:11.548743 | orchestrator | Thursday 19 February 2026 06:54:26 +0000 (0:00:05.510) 1:11:13.015 ***** 
2026-02-19 06:55:11.548750 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-19 06:55:11.548756 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-19 06:55:11.548762 | orchestrator | 2026-02-19 06:55:11.548768 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-19 06:55:11.548774 | orchestrator | Thursday 19 February 2026 06:54:30 +0000 (0:00:03.653) 1:11:16.669 ***** 2026-02-19 06:55:11.548780 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-19 06:55:11.548786 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:55:11.548792 | orchestrator | 2026-02-19 06:55:11.548798 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-19 06:55:11.548804 | orchestrator | Thursday 19 February 2026 06:54:32 +0000 (0:00:01.604) 1:11:18.273 ***** 2026-02-19 06:55:11.548811 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-02-19 06:55:11.548817 | orchestrator | 2026-02-19 06:55:11.548823 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-19 06:55:11.548829 | orchestrator | Thursday 19 February 2026 06:54:33 +0000 (0:00:01.151) 1:11:19.425 ***** 2026-02-19 06:55:11.548835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:55:11.548846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:55:11.548852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:55:11.548858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-19 06:55:11.548865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:55:11.548871 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:55:11.548877 | orchestrator | 2026-02-19 06:55:11.548883 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-19 06:55:11.548889 | orchestrator | Thursday 19 February 2026 06:54:34 +0000 (0:00:01.604) 1:11:21.030 ***** 2026-02-19 06:55:11.548896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:55:11.548902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:55:11.548908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:55:11.548914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:55:11.548920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-19 06:55:11.548926 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:55:11.548933 | orchestrator | 2026-02-19 06:55:11.548939 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-19 06:55:11.548945 | orchestrator | Thursday 19 February 2026 06:54:36 +0000 (0:00:01.595) 1:11:22.625 ***** 2026-02-19 06:55:11.548951 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-19 06:55:11.548958 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-19 06:55:11.548969 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-19 06:55:11.548975 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-19 06:55:11.548982 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-19 06:55:11.548988 | orchestrator | 2026-02-19 06:55:11.548995 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-19 06:55:11.549001 | orchestrator | Thursday 19 February 2026 06:55:10 +0000 (0:00:34.376) 1:11:57.001 ***** 2026-02-19 06:55:11.549007 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:55:11.549013 | orchestrator | 2026-02-19 06:55:11.549019 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-19 06:55:11.549029 | orchestrator | Thursday 19 February 2026 06:55:11 +0000 (0:00:00.760) 1:11:57.762 ***** 2026-02-19 06:56:02.516851 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:02.517001 | orchestrator | 2026-02-19 06:56:02.517078 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-19 06:56:02.517092 | orchestrator | Thursday 19 February 2026 06:55:12 +0000 (0:00:00.784) 1:11:58.546 ***** 2026-02-19 06:56:02.517104 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-02-19 06:56:02.517116 | orchestrator | 2026-02-19 06:56:02.517128 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-19 06:56:02.517138 | orchestrator | Thursday 19 February 2026 06:55:13 +0000 (0:00:01.109) 1:11:59.656 ***** 2026-02-19 06:56:02.517150 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-02-19 06:56:02.517160 | orchestrator | 2026-02-19 06:56:02.517172 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-19 06:56:02.517183 | orchestrator | Thursday 19 February 2026 06:55:14 +0000 (0:00:01.138) 1:12:00.795 ***** 2026-02-19 06:56:02.517194 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:02.517206 | orchestrator | 2026-02-19 06:56:02.517217 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-19 06:56:02.517228 | orchestrator | Thursday 19 February 2026 06:55:16 +0000 (0:00:02.019) 1:12:02.815 ***** 2026-02-19 06:56:02.517238 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:02.517249 | orchestrator | 2026-02-19 06:56:02.517260 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-19 06:56:02.517271 | orchestrator | Thursday 19 February 2026 06:55:18 +0000 (0:00:01.936) 1:12:04.751 ***** 2026-02-19 06:56:02.517282 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:02.517293 | orchestrator | 2026-02-19 06:56:02.517303 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-19 06:56:02.517314 | orchestrator | Thursday 19 February 2026 06:55:20 +0000 (0:00:02.235) 1:12:06.986 ***** 2026-02-19 06:56:02.517343 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-19 06:56:02.517356 | orchestrator | 2026-02-19 06:56:02.517367 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-02-19 06:56:02.517379 | 
orchestrator | skipping: no hosts matched 2026-02-19 06:56:02.517392 | orchestrator | 2026-02-19 06:56:02.517405 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-02-19 06:56:02.517417 | orchestrator | skipping: no hosts matched 2026-02-19 06:56:02.517430 | orchestrator | 2026-02-19 06:56:02.517442 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-02-19 06:56:02.517456 | orchestrator | skipping: no hosts matched 2026-02-19 06:56:02.517468 | orchestrator | 2026-02-19 06:56:02.517479 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-02-19 06:56:02.517513 | orchestrator | 2026-02-19 06:56:02.517524 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-02-19 06:56:02.517535 | orchestrator | Thursday 19 February 2026 06:55:25 +0000 (0:00:04.299) 1:12:11.286 ***** 2026-02-19 06:56:02.517546 | orchestrator | changed: [testbed-node-1] 2026-02-19 06:56:02.517557 | orchestrator | changed: [testbed-node-0] 2026-02-19 06:56:02.517568 | orchestrator | changed: [testbed-node-2] 2026-02-19 06:56:02.517578 | orchestrator | changed: [testbed-node-4] 2026-02-19 06:56:02.517589 | orchestrator | changed: [testbed-node-3] 2026-02-19 06:56:02.517614 | orchestrator | changed: [testbed-node-5] 2026-02-19 06:56:02.517636 | orchestrator | 2026-02-19 06:56:02.517647 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-02-19 06:56:02.517658 | orchestrator | Thursday 19 February 2026 06:55:27 +0000 (0:00:02.542) 1:12:13.829 ***** 2026-02-19 06:56:02.517669 | orchestrator | changed: [testbed-node-0] 2026-02-19 06:56:02.517679 | orchestrator | changed: [testbed-node-1] 2026-02-19 06:56:02.517690 | orchestrator | changed: [testbed-node-2] 2026-02-19 06:56:02.517700 | orchestrator | changed: [testbed-node-3] 2026-02-19 06:56:02.517711 | 
orchestrator | changed: [testbed-node-4] 2026-02-19 06:56:02.517721 | orchestrator | changed: [testbed-node-5] 2026-02-19 06:56:02.517732 | orchestrator | 2026-02-19 06:56:02.517743 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 06:56:02.517754 | orchestrator | Thursday 19 February 2026 06:55:31 +0000 (0:00:03.445) 1:12:17.274 ***** 2026-02-19 06:56:02.517764 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:02.517775 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:02.517786 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:02.517797 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:02.517808 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:02.517818 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:02.517832 | orchestrator | 2026-02-19 06:56:02.517851 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 06:56:02.517869 | orchestrator | Thursday 19 February 2026 06:55:33 +0000 (0:00:02.352) 1:12:19.626 ***** 2026-02-19 06:56:02.517887 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:02.517904 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:02.517923 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:02.517942 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:02.517961 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:02.517979 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:02.517994 | orchestrator | 2026-02-19 06:56:02.518096 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-19 06:56:02.518112 | orchestrator | Thursday 19 February 2026 06:55:35 +0000 (0:00:02.177) 1:12:21.804 ***** 2026-02-19 06:56:02.518125 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 06:56:02.518138 | 
orchestrator | 2026-02-19 06:56:02.518149 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-19 06:56:02.518160 | orchestrator | Thursday 19 February 2026 06:55:37 +0000 (0:00:01.864) 1:12:23.669 ***** 2026-02-19 06:56:02.518171 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 06:56:02.518182 | orchestrator | 2026-02-19 06:56:02.518214 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-19 06:56:02.518226 | orchestrator | Thursday 19 February 2026 06:55:39 +0000 (0:00:02.104) 1:12:25.773 ***** 2026-02-19 06:56:02.518237 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:02.518247 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:02.518258 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:56:02.518269 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:02.518280 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:02.518291 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:02.518314 | orchestrator | 2026-02-19 06:56:02.518325 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-19 06:56:02.518336 | orchestrator | Thursday 19 February 2026 06:55:41 +0000 (0:00:02.174) 1:12:27.948 ***** 2026-02-19 06:56:02.518347 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:02.518358 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:02.518369 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:02.518380 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:02.518391 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:02.518402 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:02.518412 | orchestrator | 2026-02-19 06:56:02.518423 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-02-19 06:56:02.518434 | orchestrator | Thursday 19 February 2026 06:55:43 +0000 (0:00:02.054) 1:12:30.003 ***** 2026-02-19 06:56:02.518444 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:02.518455 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:02.518466 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:02.518477 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:02.518488 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:02.518499 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:02.518509 | orchestrator | 2026-02-19 06:56:02.518520 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-19 06:56:02.518531 | orchestrator | Thursday 19 February 2026 06:55:46 +0000 (0:00:02.485) 1:12:32.488 ***** 2026-02-19 06:56:02.518542 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:02.518553 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:02.518564 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:02.518575 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:02.518593 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:02.518604 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:02.518615 | orchestrator | 2026-02-19 06:56:02.518626 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-19 06:56:02.518637 | orchestrator | Thursday 19 February 2026 06:55:48 +0000 (0:00:02.070) 1:12:34.559 ***** 2026-02-19 06:56:02.518647 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:56:02.518658 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:02.518669 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:02.518680 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:02.518690 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:02.518701 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:02.518712 | orchestrator | 
2026-02-19 06:56:02.518723 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-19 06:56:02.518734 | orchestrator | Thursday 19 February 2026 06:55:50 +0000 (0:00:02.296) 1:12:36.855 ***** 2026-02-19 06:56:02.518744 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:02.518755 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:02.518766 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:02.518777 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:56:02.518788 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:02.518798 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:02.518809 | orchestrator | 2026-02-19 06:56:02.518820 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-19 06:56:02.518830 | orchestrator | Thursday 19 February 2026 06:55:52 +0000 (0:00:01.753) 1:12:38.609 ***** 2026-02-19 06:56:02.518841 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:02.518852 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:02.518863 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:02.518873 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:56:02.518884 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:02.518895 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:02.518905 | orchestrator | 2026-02-19 06:56:02.518916 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-19 06:56:02.518927 | orchestrator | Thursday 19 February 2026 06:55:54 +0000 (0:00:02.058) 1:12:40.668 ***** 2026-02-19 06:56:02.518944 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:02.518955 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:02.518966 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:02.518977 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:02.518988 | orchestrator | ok: [testbed-node-4] 
2026-02-19 06:56:02.518998 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:02.519040 | orchestrator | 2026-02-19 06:56:02.519052 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-19 06:56:02.519063 | orchestrator | Thursday 19 February 2026 06:55:56 +0000 (0:00:02.181) 1:12:42.849 ***** 2026-02-19 06:56:02.519073 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:02.519084 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:02.519095 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:02.519106 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:02.519116 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:02.519127 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:02.519137 | orchestrator | 2026-02-19 06:56:02.519148 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-19 06:56:02.519159 | orchestrator | Thursday 19 February 2026 06:55:58 +0000 (0:00:02.049) 1:12:44.899 ***** 2026-02-19 06:56:02.519170 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:02.519181 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:02.519192 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:02.519202 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:56:02.519213 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:02.519224 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:02.519235 | orchestrator | 2026-02-19 06:56:02.519245 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-19 06:56:02.519256 | orchestrator | Thursday 19 February 2026 06:56:00 +0000 (0:00:02.131) 1:12:47.030 ***** 2026-02-19 06:56:02.519267 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:02.519278 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:02.519289 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:02.519300 | orchestrator | skipping: 
[testbed-node-3] 2026-02-19 06:56:02.519311 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:02.519322 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:02.519333 | orchestrator | 2026-02-19 06:56:02.519351 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-19 06:56:57.401741 | orchestrator | Thursday 19 February 2026 06:56:02 +0000 (0:00:01.703) 1:12:48.733 ***** 2026-02-19 06:56:57.401852 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:57.401867 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:57.401876 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:57.401898 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:57.401907 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:57.401917 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:57.401926 | orchestrator | 2026-02-19 06:56:57.401942 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-19 06:56:57.402072 | orchestrator | Thursday 19 February 2026 06:56:04 +0000 (0:00:02.020) 1:12:50.754 ***** 2026-02-19 06:56:57.402092 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:57.402102 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:57.402111 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:57.402120 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:57.402129 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:57.402138 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:57.402147 | orchestrator | 2026-02-19 06:56:57.402156 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-19 06:56:57.402165 | orchestrator | Thursday 19 February 2026 06:56:06 +0000 (0:00:01.769) 1:12:52.523 ***** 2026-02-19 06:56:57.402174 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:57.402183 | orchestrator | skipping: [testbed-node-1] 2026-02-19 
06:56:57.402192 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:57.402200 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:57.402232 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:57.402241 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:57.402250 | orchestrator | 2026-02-19 06:56:57.402259 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-19 06:56:57.402268 | orchestrator | Thursday 19 February 2026 06:56:08 +0000 (0:00:02.090) 1:12:54.614 ***** 2026-02-19 06:56:57.402277 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:57.402287 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:57.402297 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:57.402307 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:56:57.402336 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:57.402352 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:57.402368 | orchestrator | 2026-02-19 06:56:57.402383 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-19 06:56:57.402399 | orchestrator | Thursday 19 February 2026 06:56:10 +0000 (0:00:01.928) 1:12:56.542 ***** 2026-02-19 06:56:57.402415 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:57.402431 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:57.402447 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:57.402457 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:56:57.402466 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:57.402475 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:57.402483 | orchestrator | 2026-02-19 06:56:57.402492 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-19 06:56:57.402500 | orchestrator | Thursday 19 February 2026 06:56:12 +0000 (0:00:01.950) 1:12:58.493 ***** 2026-02-19 06:56:57.402509 | 
orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:57.402518 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:57.402526 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:57.402535 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:56:57.402544 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:57.402553 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:57.402562 | orchestrator | 2026-02-19 06:56:57.402570 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-19 06:56:57.402579 | orchestrator | Thursday 19 February 2026 06:56:13 +0000 (0:00:01.723) 1:13:00.216 ***** 2026-02-19 06:56:57.402588 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:57.402596 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:57.402605 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:57.402613 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:57.402622 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:57.402630 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:57.402639 | orchestrator | 2026-02-19 06:56:57.402648 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-19 06:56:57.402656 | orchestrator | Thursday 19 February 2026 06:56:15 +0000 (0:00:01.878) 1:13:02.095 ***** 2026-02-19 06:56:57.402665 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:57.402678 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:57.402693 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:57.402707 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:57.402722 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:57.402737 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:57.402752 | orchestrator | 2026-02-19 06:56:57.402767 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-19 06:56:57.402782 | orchestrator | Thursday 19 February 2026 06:56:17 +0000 (0:00:01.895) 
1:13:03.991 ***** 2026-02-19 06:56:57.402797 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:57.402812 | orchestrator | 2026-02-19 06:56:57.402827 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-19 06:56:57.402842 | orchestrator | Thursday 19 February 2026 06:56:21 +0000 (0:00:03.525) 1:13:07.516 ***** 2026-02-19 06:56:57.402853 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:57.402862 | orchestrator | 2026-02-19 06:56:57.402871 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-19 06:56:57.402880 | orchestrator | Thursday 19 February 2026 06:56:24 +0000 (0:00:03.292) 1:13:10.809 ***** 2026-02-19 06:56:57.402898 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:57.402907 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:57.402916 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:57.402924 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:57.402932 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:57.402941 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:57.402976 | orchestrator | 2026-02-19 06:56:57.402985 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-19 06:56:57.402994 | orchestrator | Thursday 19 February 2026 06:56:27 +0000 (0:00:02.577) 1:13:13.386 ***** 2026-02-19 06:56:57.403002 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:57.403011 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:57.403019 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:57.403033 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:57.403047 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:57.403061 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:57.403075 | orchestrator | 2026-02-19 06:56:57.403090 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-02-19 06:56:57.403125 | orchestrator | 
Thursday 19 February 2026 06:56:29 +0000 (0:00:02.027) 1:13:15.414 ***** 2026-02-19 06:56:57.403143 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-19 06:56:57.403155 | orchestrator | 2026-02-19 06:56:57.403164 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-19 06:56:57.403173 | orchestrator | Thursday 19 February 2026 06:56:31 +0000 (0:00:02.599) 1:13:18.013 ***** 2026-02-19 06:56:57.403182 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:57.403190 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:57.403199 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:57.403207 | orchestrator | ok: [testbed-node-3] 2026-02-19 06:56:57.403216 | orchestrator | ok: [testbed-node-4] 2026-02-19 06:56:57.403224 | orchestrator | ok: [testbed-node-5] 2026-02-19 06:56:57.403233 | orchestrator | 2026-02-19 06:56:57.403242 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-19 06:56:57.403250 | orchestrator | Thursday 19 February 2026 06:56:34 +0000 (0:00:02.851) 1:13:20.864 ***** 2026-02-19 06:56:57.403259 | orchestrator | changed: [testbed-node-3] 2026-02-19 06:56:57.403268 | orchestrator | changed: [testbed-node-4] 2026-02-19 06:56:57.403276 | orchestrator | changed: [testbed-node-0] 2026-02-19 06:56:57.403285 | orchestrator | changed: [testbed-node-1] 2026-02-19 06:56:57.403294 | orchestrator | changed: [testbed-node-2] 2026-02-19 06:56:57.403302 | orchestrator | changed: [testbed-node-5] 2026-02-19 06:56:57.403311 | orchestrator | 2026-02-19 06:56:57.403320 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-02-19 06:56:57.403329 | orchestrator | 2026-02-19 06:56:57.403337 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-02-19 06:56:57.403346 | orchestrator | Thursday 19 February 2026 06:56:40 +0000 (0:00:05.361) 1:13:26.226 ***** 2026-02-19 06:56:57.403354 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:57.403363 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:57.403371 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:57.403381 | orchestrator | 2026-02-19 06:56:57.403403 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 06:56:57.403419 | orchestrator | Thursday 19 February 2026 06:56:41 +0000 (0:00:01.678) 1:13:27.904 ***** 2026-02-19 06:56:57.403433 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:57.403449 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:56:57.403464 | orchestrator | ok: [testbed-node-2] 2026-02-19 06:56:57.403479 | orchestrator | 2026-02-19 06:56:57.403494 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-19 06:56:57.403507 | orchestrator | Thursday 19 February 2026 06:56:43 +0000 (0:00:01.528) 1:13:29.432 ***** 2026-02-19 06:56:57.403522 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:56:57.403536 | orchestrator | 2026-02-19 06:56:57.403560 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-19 06:56:57.403575 | orchestrator | Thursday 19 February 2026 06:56:45 +0000 (0:00:02.419) 1:13:31.852 ***** 2026-02-19 06:56:57.403589 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:57.403603 | orchestrator | 2026-02-19 06:56:57.403617 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-02-19 06:56:57.403631 | orchestrator | 2026-02-19 06:56:57.403646 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-02-19 06:56:57.403662 | orchestrator | Thursday 19 February 2026 06:56:47 +0000 (0:00:01.917) 1:13:33.770 ***** 2026-02-19 
06:56:57.403694 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:57.403726 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:57.403743 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:57.403757 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:56:57.403771 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:57.403785 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:57.403799 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:56:57.403812 | orchestrator | 2026-02-19 06:56:57.403826 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 06:56:57.403840 | orchestrator | Thursday 19 February 2026 06:56:49 +0000 (0:00:02.079) 1:13:35.849 ***** 2026-02-19 06:56:57.403855 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:57.403869 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:57.403883 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:57.403897 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:56:57.403913 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:57.403928 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:57.403942 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:56:57.403981 | orchestrator | 2026-02-19 06:56:57.403990 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-19 06:56:57.403999 | orchestrator | Thursday 19 February 2026 06:56:51 +0000 (0:00:02.328) 1:13:38.178 ***** 2026-02-19 06:56:57.404008 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:57.404017 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:57.404025 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:57.404034 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:56:57.404042 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:57.404051 | orchestrator | skipping: [testbed-node-5] 2026-02-19 
06:56:57.404060 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:56:57.404068 | orchestrator | 2026-02-19 06:56:57.404077 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-02-19 06:56:57.404086 | orchestrator | Thursday 19 February 2026 06:56:54 +0000 (0:00:02.325) 1:13:40.504 ***** 2026-02-19 06:56:57.404095 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:57.404103 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:57.404112 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:57.404120 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:56:57.404129 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:56:57.404137 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:56:57.404146 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:56:57.404154 | orchestrator | 2026-02-19 06:56:57.404163 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-02-19 06:56:57.404172 | orchestrator | Thursday 19 February 2026 06:56:56 +0000 (0:00:02.321) 1:13:42.826 ***** 2026-02-19 06:56:57.404180 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:56:57.404189 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:56:57.404198 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:56:57.404217 | orchestrator | skipping: [testbed-node-3] 2026-02-19 06:57:46.963807 | orchestrator | skipping: [testbed-node-4] 2026-02-19 06:57:46.963999 | orchestrator | skipping: [testbed-node-5] 2026-02-19 06:57:46.964033 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964053 | orchestrator | 2026-02-19 06:57:46.964075 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-02-19 06:57:46.964127 | orchestrator | 2026-02-19 06:57:46.964149 | orchestrator | TASK [Stop monitoring services] ************************************************ 2026-02-19 06:57:46.964169 | 
orchestrator | Thursday 19 February 2026 06:56:59 +0000 (0:00:03.161) 1:13:45.987 ***** 2026-02-19 06:57:46.964190 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-02-19 06:57:46.964202 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-02-19 06:57:46.964213 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-02-19 06:57:46.964224 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964235 | orchestrator | 2026-02-19 06:57:46.964247 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-19 06:57:46.964258 | orchestrator | Thursday 19 February 2026 06:57:00 +0000 (0:00:01.145) 1:13:47.132 ***** 2026-02-19 06:57:46.964269 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964279 | orchestrator | 2026-02-19 06:57:46.964290 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-19 06:57:46.964302 | orchestrator | Thursday 19 February 2026 06:57:02 +0000 (0:00:01.130) 1:13:48.262 ***** 2026-02-19 06:57:46.964315 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964328 | orchestrator | 2026-02-19 06:57:46.964341 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-19 06:57:46.964353 | orchestrator | Thursday 19 February 2026 06:57:03 +0000 (0:00:01.110) 1:13:49.373 ***** 2026-02-19 06:57:46.964366 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964378 | orchestrator | 2026-02-19 06:57:46.964391 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-19 06:57:46.964418 | orchestrator | Thursday 19 February 2026 06:57:04 +0000 (0:00:01.111) 1:13:50.484 ***** 2026-02-19 06:57:46.964431 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964444 | orchestrator | 2026-02-19 06:57:46.964456 | orchestrator | TASK [ceph-prometheus : Create 
prometheus directories] ************************* 2026-02-19 06:57:46.964469 | orchestrator | Thursday 19 February 2026 06:57:05 +0000 (0:00:01.121) 1:13:51.605 ***** 2026-02-19 06:57:46.964481 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-02-19 06:57:46.964494 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-02-19 06:57:46.964506 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964518 | orchestrator | 2026-02-19 06:57:46.964530 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-02-19 06:57:46.964542 | orchestrator | Thursday 19 February 2026 06:57:06 +0000 (0:00:01.151) 1:13:52.757 ***** 2026-02-19 06:57:46.964554 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964565 | orchestrator | 2026-02-19 06:57:46.964578 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-02-19 06:57:46.964591 | orchestrator | Thursday 19 February 2026 06:57:07 +0000 (0:00:01.121) 1:13:53.878 ***** 2026-02-19 06:57:46.964603 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964615 | orchestrator | 2026-02-19 06:57:46.964627 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-02-19 06:57:46.964640 | orchestrator | Thursday 19 February 2026 06:57:08 +0000 (0:00:01.145) 1:13:55.024 ***** 2026-02-19 06:57:46.964652 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964664 | orchestrator | 2026-02-19 06:57:46.964674 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-02-19 06:57:46.964685 | orchestrator | Thursday 19 February 2026 06:57:10 +0000 (0:00:01.214) 1:13:56.238 ***** 2026-02-19 06:57:46.964696 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-02-19 06:57:46.964706 | orchestrator | skipping: [testbed-manager] => 
(item=/var/lib/alertmanager)  2026-02-19 06:57:46.964717 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964728 | orchestrator | 2026-02-19 06:57:46.964739 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-02-19 06:57:46.964749 | orchestrator | Thursday 19 February 2026 06:57:11 +0000 (0:00:01.160) 1:13:57.399 ***** 2026-02-19 06:57:46.964769 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964780 | orchestrator | 2026-02-19 06:57:46.964791 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-02-19 06:57:46.964801 | orchestrator | Thursday 19 February 2026 06:57:12 +0000 (0:00:01.119) 1:13:58.519 ***** 2026-02-19 06:57:46.964812 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964824 | orchestrator | 2026-02-19 06:57:46.964834 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-02-19 06:57:46.964845 | orchestrator | Thursday 19 February 2026 06:57:13 +0000 (0:00:01.099) 1:13:59.619 ***** 2026-02-19 06:57:46.964856 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.964867 | orchestrator | 2026-02-19 06:57:46.964878 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-02-19 06:57:46.964889 | orchestrator | Thursday 19 February 2026 06:57:14 +0000 (0:00:01.174) 1:14:00.793 ***** 2026-02-19 06:57:46.964994 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:57:46.965007 | orchestrator | 2026-02-19 06:57:46.965018 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-02-19 06:57:46.965029 | orchestrator | 2026-02-19 06:57:46.965050 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-19 06:57:46.965069 | orchestrator | Thursday 19 February 2026 06:57:16 +0000 (0:00:01.911) 1:14:02.704 ***** 2026-02-19 
06:57:46.965089 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:57:46.965108 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:57:46.965126 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:57:46.965147 | orchestrator | 2026-02-19 06:57:46.965167 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-19 06:57:46.965187 | orchestrator | Thursday 19 February 2026 06:57:17 +0000 (0:00:01.320) 1:14:04.025 ***** 2026-02-19 06:57:46.965207 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:57:46.965226 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:57:46.965276 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:57:46.965297 | orchestrator | 2026-02-19 06:57:46.965310 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-19 06:57:46.965321 | orchestrator | Thursday 19 February 2026 06:57:19 +0000 (0:00:01.356) 1:14:05.381 ***** 2026-02-19 06:57:46.965331 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:57:46.965345 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:57:46.965363 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:57:46.965391 | orchestrator | 2026-02-19 06:57:46.965411 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-19 06:57:46.965429 | orchestrator | Thursday 19 February 2026 06:57:20 +0000 (0:00:01.337) 1:14:06.719 ***** 2026-02-19 06:57:46.965446 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:57:46.965465 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:57:46.965484 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:57:46.965502 | orchestrator | 2026-02-19 06:57:46.965520 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-19 06:57:46.965531 | orchestrator | Thursday 19 February 2026 06:57:22 +0000 (0:00:01.620) 1:14:08.339 ***** 2026-02-19 
06:57:46.965542 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:57:46.965553 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:57:46.965564 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:57:46.965574 | orchestrator | 2026-02-19 06:57:46.965585 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-02-19 06:57:46.965596 | orchestrator | Thursday 19 February 2026 06:57:23 +0000 (0:00:01.321) 1:14:09.660 ***** 2026-02-19 06:57:46.965607 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:57:46.965618 | orchestrator | skipping: [testbed-node-1] 2026-02-19 06:57:46.965629 | orchestrator | skipping: [testbed-node-2] 2026-02-19 06:57:46.965640 | orchestrator | 2026-02-19 06:57:46.965651 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-02-19 06:57:46.965725 | orchestrator | Thursday 19 February 2026 06:57:24 +0000 (0:00:01.423) 1:14:11.083 ***** 2026-02-19 06:57:46.965736 | orchestrator | skipping: [testbed-node-0] 2026-02-19 06:57:46.965748 | orchestrator | 2026-02-19 06:57:46.965768 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-02-19 06:57:46.965779 | orchestrator | 2026-02-19 06:57:46.965790 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-19 06:57:46.965801 | orchestrator | Thursday 19 February 2026 06:57:26 +0000 (0:00:01.885) 1:14:12.969 ***** 2026-02-19 06:57:46.965812 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:57:46.965823 | orchestrator | 2026-02-19 06:57:46.965834 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-19 06:57:46.965845 | orchestrator | Thursday 19 February 2026 06:57:28 +0000 (0:00:01.535) 1:14:14.505 ***** 2026-02-19 06:57:46.965855 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:57:46.965866 | orchestrator | 2026-02-19 06:57:46.965877 
| orchestrator | TASK [Set_fact ceph_cmd] ******************************************************* 2026-02-19 06:57:46.965888 | orchestrator | Thursday 19 February 2026 06:57:29 +0000 (0:00:01.135) 1:14:15.641 ***** 2026-02-19 06:57:46.965936 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:57:46.965956 | orchestrator | 2026-02-19 06:57:46.965974 | orchestrator | TASK [Backup the crushmap] ***************************************************** 2026-02-19 06:57:46.965992 | orchestrator | Thursday 19 February 2026 06:57:30 +0000 (0:00:01.154) 1:14:16.796 ***** 2026-02-19 06:57:46.966006 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:57:46.966098 | orchestrator | 2026-02-19 06:57:46.966131 | orchestrator | TASK [Switch crush buckets to straw2] ****************************************** 2026-02-19 06:57:46.966149 | orchestrator | Thursday 19 February 2026 06:57:33 +0000 (0:00:03.058) 1:14:19.854 ***** 2026-02-19 06:57:46.966166 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:57:46.966178 | orchestrator | 2026-02-19 06:57:46.966188 | orchestrator | TASK [Remove crushmap backup] ************************************************** 2026-02-19 06:57:46.966199 | orchestrator | Thursday 19 February 2026 06:57:37 +0000 (0:00:03.588) 1:14:23.442 ***** 2026-02-19 06:57:46.966217 | orchestrator | changed: [testbed-node-0] 2026-02-19 06:57:46.966235 | orchestrator | 2026-02-19 06:57:46.966254 | orchestrator | PLAY [Show ceph status] ******************************************************** 2026-02-19 06:57:46.966272 | orchestrator | 2026-02-19 06:57:46.966291 | orchestrator | TASK [Set_fact container_exec_cmd_status] ************************************** 2026-02-19 06:57:46.966310 | orchestrator | Thursday 19 February 2026 06:57:38 +0000 (0:00:01.766) 1:14:25.208 ***** 2026-02-19 06:57:46.966329 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:57:46.966346 | orchestrator | ok: [testbed-node-1] 2026-02-19 06:57:46.966364 | orchestrator | ok: [testbed-node-2] 2026-02-19 
06:57:46.966382 | orchestrator | 2026-02-19 06:57:46.966401 | orchestrator | TASK [Show ceph status] ******************************************************** 2026-02-19 06:57:46.966416 | orchestrator | Thursday 19 February 2026 06:57:40 +0000 (0:00:01.751) 1:14:26.959 ***** 2026-02-19 06:57:46.966435 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:57:46.966453 | orchestrator | 2026-02-19 06:57:46.966472 | orchestrator | TASK [Show all daemons version] ************************************************ 2026-02-19 06:57:46.966491 | orchestrator | Thursday 19 February 2026 06:57:43 +0000 (0:00:02.358) 1:14:29.318 ***** 2026-02-19 06:57:46.966510 | orchestrator | ok: [testbed-node-0] 2026-02-19 06:57:46.966530 | orchestrator | 2026-02-19 06:57:46.966548 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 06:57:46.966566 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-19 06:57:46.966588 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0 2026-02-19 06:57:46.966608 | orchestrator | testbed-node-0 : ok=248  changed=20  unreachable=0 failed=0 skipped=376  rescued=0 ignored=0 2026-02-19 06:57:46.966628 | orchestrator | testbed-node-1 : ok=191  changed=15  unreachable=0 failed=0 skipped=350  rescued=0 ignored=0 2026-02-19 06:57:46.966682 | orchestrator | testbed-node-2 : ok=196  changed=16  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0 2026-02-19 06:57:47.623257 | orchestrator | testbed-node-3 : ok=316  changed=22  unreachable=0 failed=0 skipped=362  rescued=0 ignored=0 2026-02-19 06:57:47.623357 | orchestrator | testbed-node-4 : ok=302  changed=18  unreachable=0 failed=0 skipped=345  rescued=0 ignored=0 2026-02-19 06:57:47.623372 | orchestrator | testbed-node-5 : ok=309  changed=17  unreachable=0 failed=0 skipped=358  rescued=0 ignored=0 2026-02-19 06:57:47.623384 | orchestrator | 2026-02-19 
06:57:47.623396 | orchestrator | 2026-02-19 06:57:47.623407 | orchestrator | 2026-02-19 06:57:47.623418 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 06:57:47.623430 | orchestrator | Thursday 19 February 2026 06:57:46 +0000 (0:00:03.840) 1:14:33.158 ***** 2026-02-19 06:57:47.623441 | orchestrator | =============================================================================== 2026-02-19 06:57:47.623452 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 82.02s 2026-02-19 06:57:47.623463 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 78.20s 2026-02-19 06:57:47.623473 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 34.38s 2026-02-19 06:57:47.623484 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 34.18s 2026-02-19 06:57:47.623495 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 33.81s 2026-02-19 06:57:47.623505 | orchestrator | Gather and delegate facts ---------------------------------------------- 31.10s 2026-02-19 06:57:47.623535 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 30.76s 2026-02-19 06:57:47.623547 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 30.34s 2026-02-19 06:57:47.623557 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 23.12s 2026-02-19 06:57:47.623568 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.99s 2026-02-19 06:57:47.623579 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.90s 2026-02-19 06:57:47.623589 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.90s 2026-02-19 06:57:47.623600 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.58s 2026-02-19 06:57:47.623611 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.63s 2026-02-19 06:57:47.623621 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 13.09s 2026-02-19 06:57:47.623632 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.89s 2026-02-19 06:57:47.623643 | orchestrator | Stop ceph osd ---------------------------------------------------------- 11.85s 2026-02-19 06:57:47.623653 | orchestrator | Set cluster configs ---------------------------------------------------- 11.03s 2026-02-19 06:57:47.623664 | orchestrator | Stop ceph mon ---------------------------------------------------------- 10.65s 2026-02-19 06:57:47.623675 | orchestrator | ceph-infra : Update cache for Debian based OSs -------------------------- 8.52s 2026-02-19 06:57:47.894163 | orchestrator | + osism apply cephclient 2026-02-19 06:57:49.924026 | orchestrator | 2026-02-19 06:57:49 | INFO  | Task 40b8ebc2-652c-4a83-a9b7-4f0f4f8dd05e (cephclient) was prepared for execution. 2026-02-19 06:57:49.924156 | orchestrator | 2026-02-19 06:57:49 | INFO  | It takes a moment until task 40b8ebc2-652c-4a83-a9b7-4f0f4f8dd05e (cephclient) has been started and output is visible here. 
2026-02-19 06:58:17.683158 | orchestrator | 2026-02-19 06:58:17.683278 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-19 06:58:17.683321 | orchestrator | 2026-02-19 06:58:17.683334 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-19 06:58:17.683345 | orchestrator | Thursday 19 February 2026 06:57:56 +0000 (0:00:02.234) 0:00:02.234 ***** 2026-02-19 06:58:17.683357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-19 06:58:17.683370 | orchestrator | 2026-02-19 06:58:17.683382 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-19 06:58:17.683393 | orchestrator | Thursday 19 February 2026 06:57:58 +0000 (0:00:01.794) 0:00:04.029 ***** 2026-02-19 06:58:17.683405 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-19 06:58:17.683415 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-19 06:58:17.683427 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-19 06:58:17.683438 | orchestrator | 2026-02-19 06:58:17.683449 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-19 06:58:17.683460 | orchestrator | Thursday 19 February 2026 06:58:00 +0000 (0:00:02.481) 0:00:06.511 ***** 2026-02-19 06:58:17.683471 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-19 06:58:17.683482 | orchestrator | 2026-02-19 06:58:17.683493 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-02-19 06:58:17.683504 | orchestrator | Thursday 19 February 2026 06:58:02 +0000 (0:00:01.965) 0:00:08.476 ***** 2026-02-19 06:58:17.683515 | orchestrator | ok: 
[testbed-manager] 2026-02-19 06:58:17.683526 | orchestrator | 2026-02-19 06:58:17.683537 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-19 06:58:17.683547 | orchestrator | Thursday 19 February 2026 06:58:04 +0000 (0:00:01.817) 0:00:10.293 ***** 2026-02-19 06:58:17.683558 | orchestrator | ok: [testbed-manager] 2026-02-19 06:58:17.683569 | orchestrator | 2026-02-19 06:58:17.683580 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-19 06:58:17.683590 | orchestrator | Thursday 19 February 2026 06:58:06 +0000 (0:00:01.844) 0:00:12.138 ***** 2026-02-19 06:58:17.683601 | orchestrator | ok: [testbed-manager] 2026-02-19 06:58:17.683612 | orchestrator | 2026-02-19 06:58:17.683623 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-19 06:58:17.683634 | orchestrator | Thursday 19 February 2026 06:58:08 +0000 (0:00:01.998) 0:00:14.137 ***** 2026-02-19 06:58:17.683644 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-19 06:58:17.683655 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool) 2026-02-19 06:58:17.683666 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-19 06:58:17.683677 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-19 06:58:17.683688 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-19 06:58:17.683698 | orchestrator | 2026-02-19 06:58:17.683711 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-19 06:58:17.683723 | orchestrator | Thursday 19 February 2026 06:58:13 +0000 (0:00:04.688) 0:00:18.825 ***** 2026-02-19 06:58:17.683736 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-19 06:58:17.683748 | orchestrator | 2026-02-19 06:58:17.683760 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-19 06:58:17.683773 
| orchestrator | Thursday 19 February 2026 06:58:14 +0000 (0:00:01.453) 0:00:20.279 ***** 2026-02-19 06:58:17.683785 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:58:17.683798 | orchestrator | 2026-02-19 06:58:17.683810 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-19 06:58:17.683823 | orchestrator | Thursday 19 February 2026 06:58:15 +0000 (0:00:01.081) 0:00:21.360 ***** 2026-02-19 06:58:17.683849 | orchestrator | skipping: [testbed-manager] 2026-02-19 06:58:17.683861 | orchestrator | 2026-02-19 06:58:17.683896 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-19 06:58:17.683907 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-19 06:58:17.683929 | orchestrator | 2026-02-19 06:58:17.683940 | orchestrator | 2026-02-19 06:58:17.683951 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-19 06:58:17.683962 | orchestrator | Thursday 19 February 2026 06:58:17 +0000 (0:00:01.580) 0:00:22.940 ***** 2026-02-19 06:58:17.683973 | orchestrator | =============================================================================== 2026-02-19 06:58:17.683984 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.69s 2026-02-19 06:58:17.683994 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.48s 2026-02-19 06:58:17.684005 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.00s 2026-02-19 06:58:17.684016 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.97s 2026-02-19 06:58:17.684027 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.84s 2026-02-19 06:58:17.684037 | orchestrator | osism.services.cephclient : Copy keyring file 
--------------------------- 1.82s 2026-02-19 06:58:17.684048 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.79s 2026-02-19 06:58:17.684059 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.58s 2026-02-19 06:58:17.684069 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.45s 2026-02-19 06:58:17.684080 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.08s 2026-02-19 06:58:17.979030 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-19 06:58:17.979135 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-02-19 06:58:17.988548 | orchestrator | + set -e 2026-02-19 06:58:17.989440 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-19 06:58:17.989473 | orchestrator | ++ export INTERACTIVE=false 2026-02-19 06:58:17.989486 | orchestrator | ++ INTERACTIVE=false 2026-02-19 06:58:17.989496 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-19 06:58:17.989505 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-19 06:58:17.989515 | orchestrator | + source /opt/manager-vars.sh 2026-02-19 06:58:17.989525 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-19 06:58:17.989535 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-19 06:58:17.989544 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-19 06:58:17.989554 | orchestrator | ++ CEPH_VERSION=reef 2026-02-19 06:58:17.989564 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-19 06:58:17.989574 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-19 06:58:17.989584 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-19 06:58:17.989594 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-19 06:58:17.989603 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-19 06:58:17.989621 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-19 06:58:17.989638 | orchestrator | ++ export ARA=false 
2026-02-19 06:58:17.989655 | orchestrator | ++ ARA=false 2026-02-19 06:58:17.989672 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-19 06:58:17.989688 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-19 06:58:17.989705 | orchestrator | ++ export TEMPEST=false 2026-02-19 06:58:17.989721 | orchestrator | ++ TEMPEST=false 2026-02-19 06:58:17.989770 | orchestrator | ++ export IS_ZUUL=true 2026-02-19 06:58:17.989787 | orchestrator | ++ IS_ZUUL=true 2026-02-19 06:58:17.989804 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 06:58:17.989820 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-02-19 06:58:17.989838 | orchestrator | ++ export EXTERNAL_API=false 2026-02-19 06:58:17.990094 | orchestrator | ++ EXTERNAL_API=false 2026-02-19 06:58:17.990117 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-19 06:58:17.990133 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-19 06:58:17.990151 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-19 06:58:17.990167 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-19 06:58:17.990182 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-19 06:58:17.990193 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-19 06:58:17.990202 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-19 06:58:17.990211 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-19 06:58:17.990221 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-19 06:58:17.990241 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-19 06:58:17.996716 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-02-19 06:58:17.996842 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-02-19 06:58:17.996938 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-19 06:58:17.996961 | orchestrator | + osism migrate rabbitmq3to4 prepare 2026-02-19 06:58:37.903630 | orchestrator | 2026-02-19 06:58:37 | ERROR  | Unable to get ansible 
vault password 2026-02-19 06:58:37.903747 | orchestrator | 2026-02-19 06:58:37 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-19 06:58:37.903765 | orchestrator | 2026-02-19 06:58:37 | ERROR  | Dropping encrypted entries 2026-02-19 06:58:37.940478 | orchestrator | 2026-02-19 06:58:37 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-02-19 06:58:37.941506 | orchestrator | 2026-02-19 06:58:37 | INFO  | Kolla configuration check passed 2026-02-19 06:58:38.166998 | orchestrator | 2026-02-19 06:58:38 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-02-19 06:58:38.185338 | orchestrator | 2026-02-19 06:58:38 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-02-19 06:58:38.390612 | orchestrator | + osism migrate rabbitmq3to4 list 2026-02-19 06:58:55.838462 | orchestrator | 2026-02-19 06:58:55 | ERROR  | Unable to get ansible vault password 2026-02-19 06:58:55.838617 | orchestrator | 2026-02-19 06:58:55 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-19 06:58:55.838648 | orchestrator | 2026-02-19 06:58:55 | ERROR  | Dropping encrypted entries 2026-02-19 06:58:55.872902 | orchestrator | 2026-02-19 06:58:55 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-02-19 06:58:56.037378 | orchestrator | 2026-02-19 06:58:56 | INFO  | Found 204 classic queue(s) in vhost '/': 2026-02-19 06:58:56.037480 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-02-19 06:58:56.037498 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-02-19 06:58:56.037514 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-02-19 06:58:56.037529 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-02-19 06:58:56.037544 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - barbican.workers_fanout_3f0208ec1aef4a4d85facdd4d97ddde9 (vhost: /, messages: 0) 2026-02-19 06:58:56.037559 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - barbican.workers_fanout_520c0a1a41a14fef928b5fb1e44e41ac (vhost: /, messages: 0) 2026-02-19 06:58:56.037573 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - barbican.workers_fanout_e7e9c654690d41f8940012d3b76d5da5 (vhost: /, messages: 0) 2026-02-19 06:58:56.037588 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-02-19 06:58:56.037602 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - central (vhost: /, messages: 0) 2026-02-19 06:58:56.037616 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-02-19 06:58:56.037630 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-02-19 06:58:56.037644 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-02-19 06:58:56.037658 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - central_fanout_331ad9fea54f4710acf675930bc049d7 (vhost: /, messages: 0) 2026-02-19 06:58:56.037672 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - central_fanout_59d51a30dce64ea687f29c92c7914669 (vhost: /, messages: 0) 2026-02-19 
06:58:56.037714 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - central_fanout_761b87a7aa32491d84febd90ff55806d (vhost: /, messages: 0) 2026-02-19 06:58:56.037728 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - central_fanout_92e6681b39184690ae47d8be82753c27 (vhost: /, messages: 0) 2026-02-19 06:58:56.037742 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - central_fanout_936001eb96b44653be42adb378597cd5 (vhost: /, messages: 0) 2026-02-19 06:58:56.037756 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - central_fanout_b6c2c4d40a72446681f53a4910114f95 (vhost: /, messages: 0) 2026-02-19 06:58:56.037769 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-02-19 06:58:56.037783 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-02-19 06:58:56.037797 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-02-19 06:58:56.037810 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-02-19 06:58:56.037824 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-backup_fanout_3cb3198f7329428c94685ae8fe8a77ec (vhost: /, messages: 0) 2026-02-19 06:58:56.037880 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-backup_fanout_8c79530a22b94fc4a0c9fe8def54d4e3 (vhost: /, messages: 0) 2026-02-19 06:58:56.037895 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-backup_fanout_d85dbf62b9364f4ea1e8e589b650b31c (vhost: /, messages: 0) 2026-02-19 06:58:56.037909 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-02-19 06:58:56.037922 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-19 06:58:56.037937 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-19 06:58:56.037953 | orchestrator | 2026-02-19 
06:58:56 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-19 06:58:56.037968 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-scheduler_fanout_9dd84e936cfa4dd5955f04fe71711757 (vhost: /, messages: 0) 2026-02-19 06:58:56.037984 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-scheduler_fanout_a349583c21e647dfa8b5b9486ed3ef1f (vhost: /, messages: 0) 2026-02-19 06:58:56.037999 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-scheduler_fanout_f1901b3a783241c5a15164e1ca57bc9c (vhost: /, messages: 0) 2026-02-19 06:58:56.038100 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-02-19 06:58:56.038118 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-02-19 06:58:56.038135 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-02-19 06:58:56.038151 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_c01d46a2d6bc42dfbf24cafcb395fa08 (vhost: /, messages: 0) 2026-02-19 06:58:56.038167 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-02-19 06:58:56.038183 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-02-19 06:58:56.038200 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_d46fde604a97458f8cdd3e07f83b5b03 (vhost: /, messages: 0) 2026-02-19 06:58:56.038216 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-02-19 06:58:56.038242 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-02-19 06:58:56.038258 | orchestrator | 2026-02-19 06:58:56 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_8b71f89ad36041b6a8b67b0f9bb1fdce (vhost: /, messages: 0) 2026-02-19 06:58:56.038275 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-volume_fanout_39f2626aa27e45079613198dbe3f2477 (vhost: /, messages: 0) 2026-02-19 06:58:56.038291 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-volume_fanout_49d6b3abd9074b02a1a627322b850edc (vhost: /, messages: 0) 2026-02-19 06:58:56.040873 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - cinder-volume_fanout_7fde8c6d57a34e02b19795aad95314e3 (vhost: /, messages: 0) 2026-02-19 06:58:56.040900 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - compute (vhost: /, messages: 0) 2026-02-19 06:58:56.040913 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-02-19 06:58:56.040925 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-02-19 06:58:56.040937 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-02-19 06:58:56.040949 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - compute_fanout_51c5e61abc1d4b188d58f50c2b292739 (vhost: /, messages: 0) 2026-02-19 06:58:56.040961 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - compute_fanout_5754db356a9841bb9b42d2ed9f7ca254 (vhost: /, messages: 0) 2026-02-19 06:58:56.040972 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - compute_fanout_d69901f7cb4249b09cd551668791e46a (vhost: /, messages: 0) 2026-02-19 06:58:56.041054 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - conductor (vhost: /, messages: 0) 2026-02-19 06:58:56.041067 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-02-19 06:58:56.041080 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-02-19 06:58:56.041092 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-02-19 06:58:56.041104 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - conductor_fanout_272b27176edc477e9f2679f7435a916d (vhost: /, messages: 0) 2026-02-19 06:58:56.041116 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - conductor_fanout_b1d121d65890433ca68fbf50ada63a74 (vhost: /, messages: 0) 2026-02-19 06:58:56.041129 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - conductor_fanout_ba8d48ad29ab4a6e9fc1235a310ef234 (vhost: /, messages: 0) 2026-02-19 06:58:56.041141 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - conductor_fanout_bd5a68e2913d4ba3864f4162bdd25db1 (vhost: /, messages: 0) 2026-02-19 06:58:56.041153 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - conductor_fanout_cca202b6ff3e4b458f5d2b020cddc50b (vhost: /, messages: 0) 2026-02-19 06:58:56.041165 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - conductor_fanout_d667f8a023ed40d18891489a5c1fa15a (vhost: /, messages: 0) 2026-02-19 06:58:56.041178 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - event.sample (vhost: /, messages: 3) 2026-02-19 06:58:56.041374 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-02-19 06:58:56.041396 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor.4bktsiqf3fzn (vhost: /, messages: 0) 2026-02-19 06:58:56.041409 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor.gk43grt5mevx (vhost: /, messages: 0) 2026-02-19 06:58:56.041552 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor.kxxrxhosureo (vhost: /, messages: 0) 2026-02-19 06:58:56.041566 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor_fanout_1a6553906dfb4e11a537ce57c006f81f (vhost: /, messages: 0) 2026-02-19 06:58:56.041578 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor_fanout_1e2e444d1d2449baa0128172ef9964ea (vhost: /, messages: 0) 2026-02-19 06:58:56.041590 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor_fanout_41eea2cec68140c08667d838fe14ff0d (vhost: /, 
messages: 0) 2026-02-19 06:58:56.041601 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor_fanout_4239b96e46984c3c95b4d8d6cfc63a0a (vhost: /, messages: 0) 2026-02-19 06:58:56.041613 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor_fanout_a23e6b27c6084a00976c74357a5a0968 (vhost: /, messages: 0) 2026-02-19 06:58:56.041685 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor_fanout_a76a0b241d2f40b780335fa45eb86b03 (vhost: /, messages: 0) 2026-02-19 06:58:56.041699 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor_fanout_b3a52142086a4cd9a18a747bce764fb3 (vhost: /, messages: 0) 2026-02-19 06:58:56.041711 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor_fanout_d4c39645174e483d96b5942c9b051680 (vhost: /, messages: 0) 2026-02-19 06:58:56.041723 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - magnum-conductor_fanout_d6eb3822554942949f86cc3fb4539be1 (vhost: /, messages: 0) 2026-02-19 06:58:56.041805 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-02-19 06:58:56.041819 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-02-19 06:58:56.041910 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-02-19 06:58:56.041922 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-02-19 06:58:56.041934 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-data_fanout_1f16beb953904ee4b81e7e8e1a9884c4 (vhost: /, messages: 0) 2026-02-19 06:58:56.041946 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-data_fanout_e4bc8727e85341f1a35e0b4a0283600a (vhost: /, messages: 0) 2026-02-19 06:58:56.041958 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-scheduler (vhost: /, messages: 0) 2026-02-19 06:58:56.041970 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: 
/, messages: 0)
2026-02-19 06:58:56.041981 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-19 06:58:56.041993 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-19 06:58:56.042004 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-scheduler_fanout_1542963d632249ae8778ae668496686f (vhost: /, messages: 0)
2026-02-19 06:58:56.042158 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-scheduler_fanout_b114d740f48b42a4bc5bc4b62daba51a (vhost: /, messages: 0)
2026-02-19 06:58:56.042173 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-scheduler_fanout_ba3f3722dad04d3ab024e403bf8a74f9 (vhost: /, messages: 0)
2026-02-19 06:58:56.042242 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-share (vhost: /, messages: 0)
2026-02-19 06:58:56.042253 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0)
2026-02-19 06:58:56.042320 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0)
2026-02-19 06:58:56.042396 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0)
2026-02-19 06:58:56.042410 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-share_fanout_2f9981788a69412d9aca07bbe3012036 (vhost: /, messages: 0)
2026-02-19 06:58:56.042420 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-share_fanout_3b8de699b04d4171a5c60d37f748558d (vhost: /, messages: 0)
2026-02-19 06:58:56.042431 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - manila-share_fanout_5205aef375384f8786105e5f0d311029 (vhost: /, messages: 0)
2026-02-19 06:58:56.042505 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - notifications.audit (vhost: /, messages: 0)
2026-02-19 06:58:56.042516 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - notifications.critical (vhost: /, messages: 0)
2026-02-19 06:58:56.042527 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - notifications.debug (vhost: /, messages: 0)
2026-02-19 06:58:56.042593 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - notifications.error (vhost: /, messages: 0)
2026-02-19 06:58:56.042606 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - notifications.info (vhost: /, messages: 0)
2026-02-19 06:58:56.042616 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - notifications.sample (vhost: /, messages: 0)
2026-02-19 06:58:56.042683 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - notifications.warn (vhost: /, messages: 0)
2026-02-19 06:58:56.042695 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0)
2026-02-19 06:58:56.042706 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0)
2026-02-19 06:58:56.042717 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0)
2026-02-19 06:58:56.042785 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0)
2026-02-19 06:58:56.042804 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - octavia_provisioning_v2_fanout_8b9df4bfe497488b97b924f1e5dde2ec (vhost: /, messages: 0)
2026-02-19 06:58:56.042889 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - octavia_provisioning_v2_fanout_c9ab7840923f436d9e42ecdf298402cf (vhost: /, messages: 0)
2026-02-19 06:58:56.042902 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - producer (vhost: /, messages: 0)
2026-02-19 06:58:56.042912 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0)
2026-02-19 06:58:56.042923 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0)
2026-02-19 06:58:56.042934 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0)
2026-02-19 06:58:56.042945 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - producer_fanout_4298e851ddfe4766ade2ecc8f790314a (vhost: /, messages: 0)
2026-02-19 06:58:56.043012 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - producer_fanout_7d255afdb43c4c919826a71334746416 (vhost: /, messages: 0)
2026-02-19 06:58:56.043023 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - producer_fanout_905b8412f951499cb81c708479f81b32 (vhost: /, messages: 0)
2026-02-19 06:58:56.043034 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - producer_fanout_aa86c559765144a3ab51c85f7eb368be (vhost: /, messages: 0)
2026-02-19 06:58:56.043045 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - producer_fanout_fd7a74ac55e8414ea3493fa89ca1aa22 (vhost: /, messages: 0)
2026-02-19 06:58:56.043113 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin (vhost: /, messages: 0)
2026-02-19 06:58:56.043126 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-02-19 06:58:56.043148 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-02-19 06:58:56.043308 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-02-19 06:58:56.043336 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin_fanout_09426b1ebc4a4171b56dc8850c8b8fe0 (vhost: /, messages: 0)
2026-02-19 06:58:56.044050 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin_fanout_159fcbf8321a45d0af39454f897f65c2 (vhost: /, messages: 0)
2026-02-19 06:58:56.044149 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin_fanout_1fa6de09dc39438ca69ff898872fd5af (vhost: /, messages: 0)
2026-02-19 06:58:56.044173 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin_fanout_2e4adfb355d142679375f9e44a43babf (vhost: /, messages: 0)
2026-02-19 06:58:56.044191 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin_fanout_35d79ef0f3cf4380b89ee18d325615dd (vhost: /, messages: 0)
2026-02-19 06:58:56.046193 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin_fanout_5a82355ee82645da8ef52de7b3cd1413 (vhost: /, messages: 0)
2026-02-19 06:58:56.046265 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin_fanout_7d0090951a23488da9958475c6d7d1e0 (vhost: /, messages: 0)
2026-02-19 06:58:56.046299 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin_fanout_93931c9e8f1b4ece9d8cea050be040a1 (vhost: /, messages: 0)
2026-02-19 06:58:56.046321 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-plugin_fanout_c89368c86f6545608c0dbd4f2ea3d9cc (vhost: /, messages: 0)
2026-02-19 06:58:56.046332 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin (vhost: /, messages: 0)
2026-02-19 06:58:56.046342 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-02-19 06:58:56.046352 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-02-19 06:58:56.046361 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-02-19 06:58:56.046379 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_0fd6636c38ac4d0696c4d0d4d9c2f329 (vhost: /, messages: 0)
2026-02-19 06:58:56.046388 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_1f34a40b05cf4cc2909fa86b847119ae (vhost: /, messages: 0)
2026-02-19 06:58:56.046397 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_2b31cf8e872b4d11ba5374b643dd1743 (vhost: /, messages: 0)
2026-02-19 06:58:56.046406 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_5968418ba7d64eebaa62bab15a050827 (vhost: /, messages: 0)
2026-02-19 06:58:56.046415 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_5b4cd421a6994720a9e24c003875f084 (vhost: /, messages: 0)
2026-02-19 06:58:56.046424 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_70eb34b8f8e84d0ca2cf5b9039399e33 (vhost: /, messages: 0)
2026-02-19 06:58:56.046432 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_71b3e501d0ff4cd3a751c7120dfa1b47 (vhost: /, messages: 0)
2026-02-19 06:58:56.046441 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_76acf2f56ba2491f97cf9303607bb5d8 (vhost: /, messages: 0)
2026-02-19 06:58:56.046450 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_7e825009243a4fd5947baf6a08e19671 (vhost: /, messages: 0)
2026-02-19 06:58:56.046459 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_89f140ce289347f68ecd6f8e44bf7a59 (vhost: /, messages: 0)
2026-02-19 06:58:56.046489 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_9341eeb8d1894b0fb11b3371f992c636 (vhost: /, messages: 0)
2026-02-19 06:58:56.046498 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_9d19248e775d43c9a8a95d09ea6f5124 (vhost: /, messages: 0)
2026-02-19 06:58:56.046507 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_9e3b797bf55d4a3fa743f02ed4a1e08a (vhost: /, messages: 0)
2026-02-19 06:58:56.046516 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_bea8de14edd14282bcb89761fa6e9a46 (vhost: /, messages: 0)
2026-02-19 06:58:56.046524 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_c29844654e0a41bc8e8dabc2340f34d2 (vhost: /, messages: 0)
2026-02-19 06:58:56.046533 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_ea2d199ff8ea4068b3028ab1478b2d31 (vhost: /, messages: 0)
2026-02-19 06:58:56.046542 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_f5399e735a5e4bc8be0c1e622fee4a12 (vhost: /, messages: 0)
2026-02-19 06:58:56.046550 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-reports-plugin_fanout_f93074d6b2a34716869c903172f299c2 (vhost: /, messages: 0)
2026-02-19 06:58:56.046559 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0)
2026-02-19 06:58:56.046568 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0)
2026-02-19 06:58:56.046577 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0)
2026-02-19 06:58:56.046586 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0)
2026-02-19 06:58:56.046610 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions_fanout_047413b0a3f34247a42ab3bec1305700 (vhost: /, messages: 0)
2026-02-19 06:58:56.046621 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions_fanout_2ca86e25f96747a0abbada729c6b61d9 (vhost: /, messages: 0)
2026-02-19 06:58:56.046635 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions_fanout_40e259919f9641c5b7501d992e2b5d01 (vhost: /, messages: 0)
2026-02-19 06:58:56.046644 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions_fanout_48b442d052c4405b83f5955b8d14498c (vhost: /, messages: 0)
2026-02-19 06:58:56.046653 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions_fanout_4a03d5cfdc434e07a755104b358c10c8 (vhost: /, messages: 0)
2026-02-19 06:58:56.046662 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions_fanout_83dc3ec3e5014e9a9640c9860ac29fcc (vhost: /, messages: 0)
2026-02-19 06:58:56.046671 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions_fanout_b495ab484a7b4787808eef068f304cbb (vhost: /, messages: 0)
2026-02-19 06:58:56.046680 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions_fanout_bad7cdc2a1784a168d1d6f5bb0f14af5 (vhost: /, messages: 0)
2026-02-19 06:58:56.046689 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - q-server-resource-versions_fanout_c9c803c98e284a82a6c855e3c217aa71 (vhost: /, messages: 0)
2026-02-19 06:58:56.046698 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_06e246154fdc4a148942d9bf8830764c (vhost: /, messages: 0)
2026-02-19 06:58:56.046707 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_166cb203fd1a463c811103ca0d58583a (vhost: /, messages: 0)
2026-02-19 06:58:56.046716 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_313d1e0ae07740a6a9c34f264ccd1a3c (vhost: /, messages: 0)
2026-02-19 06:58:56.046731 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_35ae67bc4cb749c28b01d9b63fc9b9c1 (vhost: /, messages: 0)
2026-02-19 06:58:56.047819 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_3f506d72296540129159999b8459e289 (vhost: /, messages: 0)
2026-02-19 06:58:56.047936 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_4d3fed3594b946b984de63201c3fa152 (vhost: /, messages: 0)
2026-02-19 06:58:56.047953 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_4f69d5405f68423ab3f6369f8abd262b (vhost: /, messages: 0)
2026-02-19 06:58:56.048640 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_640199dec233438d82c2869223c0e295 (vhost: /, messages: 0)
2026-02-19 06:58:56.048665 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_7f27f04d20744134be42ae54ba1d9790 (vhost: /, messages: 0)
2026-02-19 06:58:56.048679 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_84538135c0d04246871f24b19eec5ada (vhost: /, messages: 0)
2026-02-19 06:58:56.048691 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_acd45ade14b444519f772e577a4a980e (vhost: /, messages: 0)
2026-02-19 06:58:56.048704 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_ad501c0e0cd74994b4ae9105c3a3610f (vhost: /, messages: 0)
2026-02-19 06:58:56.048717 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_b84a5716d1c94940a63dddbc025e70b3 (vhost: /, messages: 0)
2026-02-19 06:58:56.048730 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_bd1761d6adac4e05944e65e5316f0b90 (vhost: /, messages: 0)
2026-02-19 06:58:56.048743 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_e525c371d1444441aa9d782376b5ce2b (vhost: /, messages: 0)
2026-02-19 06:58:56.048756 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_e6968c3f03c4457690f3996dc35f93c3 (vhost: /, messages: 0)
2026-02-19 06:58:56.048767 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_f81d30fa1c27427da3f009c3e96a8c8b (vhost: /, messages: 0)
2026-02-19 06:58:56.048778 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - reply_fb82d9f61d9e49558b52da9bd9702ce1 (vhost: /, messages: 0)
2026-02-19 06:58:56.048789 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-02-19 06:58:56.048801 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-19 06:58:56.048813 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-19 06:58:56.048823 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-19 06:58:56.048868 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - scheduler_fanout_6022cf2e6d374c02a8fcc9bcc88bcc17 (vhost: /, messages: 0)
2026-02-19 06:58:56.049437 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - scheduler_fanout_78f76ce0184b4cfe81da2dd02a280b9e (vhost: /, messages: 0)
2026-02-19 06:58:56.049591 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - scheduler_fanout_8d334ce8782f48518357abc2595cddda (vhost: /, messages: 0)
2026-02-19 06:58:56.049622 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - scheduler_fanout_939ddbff405846f5b78af1a8718a8850 (vhost: /, messages: 0)
2026-02-19 06:58:56.049727 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - scheduler_fanout_c4a1801d7ac54f5d996de25e35069a24 (vhost: /, messages: 0)
2026-02-19 06:58:56.049817 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - scheduler_fanout_db9bec370c9444a580d7b9db49c1ade3 (vhost: /, messages: 0)
2026-02-19 06:58:56.049967 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - worker (vhost: /, messages: 0)
2026-02-19 06:58:56.049980 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-02-19 06:58:56.050052 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-02-19 06:58:56.050068 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-02-19 06:58:56.050160 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - worker_fanout_1a7eeb5d2af74bbd9bbce6e057a3f0db (vhost: /, messages: 0)
2026-02-19 06:58:56.050184 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - worker_fanout_6ab0d64072fd4dbf8473eceaea366041 (vhost: /, messages: 0)
2026-02-19 06:58:56.050196 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - worker_fanout_7401d312f6994b9cbc5fd5c94ef77399 (vhost: /, messages: 0)
2026-02-19 06:58:56.050208 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - worker_fanout_960e0bdce9c54829bd34fbd150cf61be (vhost: /, messages: 0)
2026-02-19 06:58:56.050219 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - worker_fanout_b3b1c34716bc437ca0aae252e768ae3b (vhost: /, messages: 0)
2026-02-19 06:58:56.050231 | orchestrator | 2026-02-19 06:58:56 | INFO  |  - worker_fanout_f18cea356d764293852e4bee669fdda1 (vhost: /, messages: 0)
2026-02-19 06:58:56.235733 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-02-19 06:58:57.749540 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-02-19 06:58:57.749655 | orchestrator | [--no-close-connections] [--quorum]
2026-02-19 06:58:57.749681 | orchestrator | [--vhost VHOST]
2026-02-19 06:58:57.749695 | orchestrator | [{list,delete,prepare,check}]
2026-02-19 06:58:57.749708 | orchestrator | [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-02-19 06:58:57.749721 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-02-19 06:58:58.366468 | orchestrator | ERROR
2026-02-19 06:58:58.366689 | orchestrator | {
2026-02-19 06:58:58.366726 | orchestrator | "delta": "2:00:24.837481",
2026-02-19 06:58:58.366749 | orchestrator | "end": "2026-02-19 06:58:57.934831",
2026-02-19 06:58:58.366769 | orchestrator | "msg": "non-zero return code",
2026-02-19 06:58:58.366788 | orchestrator | "rc": 2,
2026-02-19 06:58:58.366806 | orchestrator | "start": "2026-02-19 04:58:33.097350"
2026-02-19 06:58:58.366824 | orchestrator | } failure
2026-02-19 06:58:58.674882 |
2026-02-19 06:58:58.675435 | PLAY RECAP
2026-02-19 06:58:58.675750 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-02-19 06:58:58.675822 |
2026-02-19 06:58:58.966941 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-02-19 06:58:58.969283 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-19 06:58:59.762718 |
2026-02-19 06:58:59.762932 | PLAY [Post output play]
2026-02-19 06:58:59.780855 |
2026-02-19 06:58:59.781012 | LOOP [stage-output : Register sources]
2026-02-19 06:58:59.832723 |
2026-02-19 06:58:59.832941 | TASK [stage-output : Check sudo]
2026-02-19 06:59:00.709516 | orchestrator | sudo: a password is required
2026-02-19 06:59:00.868489 | orchestrator | ok: Runtime: 0:00:00.018155
2026-02-19 06:59:00.883193 |
2026-02-19 06:59:00.883354 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-19 06:59:00.924837 |
2026-02-19 06:59:00.925108 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-19 06:59:01.006642 | orchestrator | ok
2026-02-19 06:59:01.015798 |
2026-02-19 06:59:01.015938 | LOOP [stage-output : Ensure target folders exist]
2026-02-19 06:59:01.505228 | orchestrator | ok: "docs"
2026-02-19 06:59:01.505583 |
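The queue report above follows one fixed shape per entry: `- <queue> (vhost: <vhost>, messages: <count>)`. A minimal sketch of a hypothetical helper (not part of osism) that turns such a report line into a structured record, useful when post-processing this log:

```python
import re

# Matches the "- <queue> (vhost: <v>, messages: <n>)" report lines above.
LINE_RE = re.compile(
    r"-\s+(?P<name>\S+)\s+\(vhost:\s+(?P<vhost>\S+),\s+messages:\s+(?P<messages>\d+)\)"
)

def parse_queue_line(line):
    """Return (name, vhost, messages) for a queue report line, else None."""
    m = LINE_RE.search(line)
    if m is None:
        return None
    return m.group("name"), m.group("vhost"), int(m.group("messages"))

print(parse_queue_line("INFO  |  - manila-share (vhost: /, messages: 0)"))
# → ('manila-share', '/', 0)
```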
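The failure above is argparse rejecting an unknown subcommand: `list-exchanges` is not among the choices in the usage text, so the CLI prints usage and exits with status 2, which Ansible then reports as `rc: 2`. A minimal sketch of a hypothetical parser mirroring that usage text (not the actual osism implementation):

```python
import argparse

# Hypothetical parser modeled on the usage text in the log above.
parser = argparse.ArgumentParser(prog="osism migrate rabbitmq3to4")
parser.add_argument("--server")
parser.add_argument("--dry-run", action="store_true")
parser.add_argument("--no-close-connections", action="store_true")
parser.add_argument("--quorum", action="store_true")
parser.add_argument("--vhost")
parser.add_argument("command", nargs="?",
                    choices=["list", "delete", "prepare", "check"])

# A valid subcommand parses cleanly ...
assert parser.parse_args(["list"]).command == "list"

# ... while an invalid choice makes argparse print the usage text and
# exit with status 2, matching the non-zero return code in the log.
try:
    parser.parse_args(["list-exchanges"])
except SystemExit as exc:
    assert exc.code == 2
```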
2026-02-19 06:59:01.765792 | orchestrator | ok: "artifacts"
2026-02-19 06:59:02.088808 | orchestrator | ok: "logs"
2026-02-19 06:59:02.110591 |
2026-02-19 06:59:02.110784 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-19 06:59:02.149977 |
2026-02-19 06:59:02.150341 | TASK [stage-output : Make all log files readable]
2026-02-19 06:59:02.444892 | orchestrator | ok
2026-02-19 06:59:02.454708 |
2026-02-19 06:59:02.454883 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-19 06:59:02.491304 | orchestrator | skipping: Conditional result was False
2026-02-19 06:59:02.503092 |
2026-02-19 06:59:02.503281 | TASK [stage-output : Discover log files for compression]
2026-02-19 06:59:02.527317 | orchestrator | skipping: Conditional result was False
2026-02-19 06:59:02.538335 |
2026-02-19 06:59:02.538482 | LOOP [stage-output : Archive everything from logs]
2026-02-19 06:59:02.584642 |
2026-02-19 06:59:02.584810 | PLAY [Post cleanup play]
2026-02-19 06:59:02.593967 |
2026-02-19 06:59:02.594067 | TASK [Set cloud fact (Zuul deployment)]
2026-02-19 06:59:02.636692 | orchestrator | ok
2026-02-19 06:59:02.647052 |
2026-02-19 06:59:02.647193 | TASK [Set cloud fact (local deployment)]
2026-02-19 06:59:02.681082 | orchestrator | skipping: Conditional result was False
2026-02-19 06:59:02.696858 |
2026-02-19 06:59:02.697007 | TASK [Clean the cloud environment]
2026-02-19 06:59:03.327199 | orchestrator | 2026-02-19 06:59:03 - clean up servers
2026-02-19 06:59:04.130894 | orchestrator | 2026-02-19 06:59:04 - testbed-manager
2026-02-19 06:59:04.214390 | orchestrator | 2026-02-19 06:59:04 - testbed-node-2
2026-02-19 06:59:04.301682 | orchestrator | 2026-02-19 06:59:04 - testbed-node-3
2026-02-19 06:59:04.387675 | orchestrator | 2026-02-19 06:59:04 - testbed-node-4
2026-02-19 06:59:04.474062 | orchestrator | 2026-02-19 06:59:04 - testbed-node-1
2026-02-19 06:59:04.572021 | orchestrator | 2026-02-19 06:59:04 - testbed-node-0
2026-02-19 06:59:04.666682 | orchestrator | 2026-02-19 06:59:04 - testbed-node-5
2026-02-19 06:59:04.760327 | orchestrator | 2026-02-19 06:59:04 - clean up keypairs
2026-02-19 06:59:04.781503 | orchestrator | 2026-02-19 06:59:04 - testbed
2026-02-19 06:59:04.813074 | orchestrator | 2026-02-19 06:59:04 - wait for servers to be gone
2026-02-19 06:59:17.923552 | orchestrator | 2026-02-19 06:59:17 - clean up ports
2026-02-19 06:59:18.107246 | orchestrator | 2026-02-19 06:59:18 - 0925b299-5d99-4a32-89d2-070570b8d105
2026-02-19 06:59:18.524232 | orchestrator | 2026-02-19 06:59:18 - 2752e1ec-cc68-4ca1-8208-3aec1c83104c
2026-02-19 06:59:18.794598 | orchestrator | 2026-02-19 06:59:18 - 51280aed-5ea9-4780-955b-01cc88a4ac29
2026-02-19 06:59:18.994617 | orchestrator | 2026-02-19 06:59:18 - 8f3836b3-849d-4c60-9b37-d2719366af54
2026-02-19 06:59:19.196573 | orchestrator | 2026-02-19 06:59:19 - 9d3b4695-258d-4005-86a3-16a315a81d60
2026-02-19 06:59:19.416224 | orchestrator | 2026-02-19 06:59:19 - c94ce648-1034-4b19-be6f-d95c149f5abb
2026-02-19 06:59:19.618156 | orchestrator | 2026-02-19 06:59:19 - df019123-e092-41ac-9f71-782d357c3595
2026-02-19 06:59:19.819564 | orchestrator | 2026-02-19 06:59:19 - clean up volumes
2026-02-19 06:59:19.991365 | orchestrator | 2026-02-19 06:59:19 - testbed-volume-manager-base
2026-02-19 06:59:20.032249 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-1-node-base
2026-02-19 06:59:20.071229 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-2-node-base
2026-02-19 06:59:20.114700 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-0-node-base
2026-02-19 06:59:20.157166 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-4-node-base
2026-02-19 06:59:20.197642 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-3-node-base
2026-02-19 06:59:20.239424 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-5-node-base
2026-02-19 06:59:20.283701 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-2-node-5
2026-02-19 06:59:20.325926 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-6-node-3
2026-02-19 06:59:20.369440 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-0-node-3
2026-02-19 06:59:20.414595 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-4-node-4
2026-02-19 06:59:20.459141 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-5-node-5
2026-02-19 06:59:20.503653 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-3-node-3
2026-02-19 06:59:20.546803 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-8-node-5
2026-02-19 06:59:20.592236 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-1-node-4
2026-02-19 06:59:20.636701 | orchestrator | 2026-02-19 06:59:20 - testbed-volume-7-node-4
2026-02-19 06:59:20.678786 | orchestrator | 2026-02-19 06:59:20 - disconnect routers
2026-02-19 06:59:20.803380 | orchestrator | 2026-02-19 06:59:20 - testbed
2026-02-19 06:59:22.447068 | orchestrator | 2026-02-19 06:59:22 - clean up subnets
2026-02-19 06:59:22.503201 | orchestrator | 2026-02-19 06:59:22 - subnet-testbed-management
2026-02-19 06:59:22.742171 | orchestrator | 2026-02-19 06:59:22 - clean up networks
2026-02-19 06:59:22.922157 | orchestrator | 2026-02-19 06:59:22 - net-testbed-management
2026-02-19 06:59:23.349420 | orchestrator | 2026-02-19 06:59:23 - clean up security groups
2026-02-19 06:59:23.391806 | orchestrator | 2026-02-19 06:59:23 - testbed-management
2026-02-19 06:59:23.499739 | orchestrator | 2026-02-19 06:59:23 - testbed-node
2026-02-19 06:59:23.612410 | orchestrator | 2026-02-19 06:59:23 - clean up floating ips
2026-02-19 06:59:23.649459 | orchestrator | 2026-02-19 06:59:23 - 81.163.193.14
2026-02-19 06:59:23.997771 | orchestrator | 2026-02-19 06:59:23 - clean up routers
2026-02-19 06:59:24.119074 | orchestrator | 2026-02-19 06:59:24 - testbed
2026-02-19 06:59:25.260552 | orchestrator | ok: Runtime: 0:00:22.038831
2026-02-19 06:59:25.265059 |
2026-02-19 06:59:25.265263 | PLAY RECAP
2026-02-19 06:59:25.265402 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-19 06:59:25.265464 |
2026-02-19 06:59:25.415791 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-19 06:59:25.418264 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-19 06:59:26.225120 |
2026-02-19 06:59:26.225342 | PLAY [Cleanup play]
2026-02-19 06:59:26.241570 |
2026-02-19 06:59:26.241714 | TASK [Set cloud fact (Zuul deployment)]
2026-02-19 06:59:26.307666 | orchestrator | ok
2026-02-19 06:59:26.317480 |
2026-02-19 06:59:26.317633 | TASK [Set cloud fact (local deployment)]
2026-02-19 06:59:26.352166 | orchestrator | skipping: Conditional result was False
2026-02-19 06:59:26.367894 |
2026-02-19 06:59:26.368034 | TASK [Clean the cloud environment]
2026-02-19 06:59:27.557611 | orchestrator | 2026-02-19 06:59:27 - clean up servers
2026-02-19 06:59:28.024929 | orchestrator | 2026-02-19 06:59:28 - clean up keypairs
2026-02-19 06:59:28.041154 | orchestrator | 2026-02-19 06:59:28 - wait for servers to be gone
2026-02-19 06:59:28.082597 | orchestrator | 2026-02-19 06:59:28 - clean up ports
2026-02-19 06:59:28.158876 | orchestrator | 2026-02-19 06:59:28 - clean up volumes
2026-02-19 06:59:28.221279 | orchestrator | 2026-02-19 06:59:28 - disconnect routers
2026-02-19 06:59:28.248665 | orchestrator | 2026-02-19 06:59:28 - clean up subnets
2026-02-19 06:59:28.270764 | orchestrator | 2026-02-19 06:59:28 - clean up networks
2026-02-19 06:59:28.392022 | orchestrator | 2026-02-19 06:59:28 - clean up security groups
2026-02-19 06:59:28.432832 | orchestrator | 2026-02-19 06:59:28 - clean up floating ips
2026-02-19 06:59:28.458091 | orchestrator | 2026-02-19 06:59:28 - clean up routers
2026-02-19 06:59:28.909539 | orchestrator | ok: Runtime: 0:00:01.319764
2026-02-19 06:59:28.913544 |
2026-02-19 06:59:28.913690 | PLAY RECAP
2026-02-19 06:59:28.913804 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-19 06:59:28.913883 |
2026-02-19 06:59:29.049818 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-19 06:59:29.052385 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-19 06:59:29.794336 |
2026-02-19 06:59:29.794503 | PLAY [Base post-fetch]
2026-02-19 06:59:29.809955 |
2026-02-19 06:59:29.810087 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-19 06:59:29.866280 | orchestrator | skipping: Conditional result was False
2026-02-19 06:59:29.879800 |
2026-02-19 06:59:29.880001 | TASK [fetch-output : Set log path for single node]
2026-02-19 06:59:29.928870 | orchestrator | ok
2026-02-19 06:59:29.937629 |
2026-02-19 06:59:29.937769 | LOOP [fetch-output : Ensure local output dirs]
2026-02-19 06:59:30.420668 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/e59f2d7123e8471ba930463cfd363772/work/logs"
2026-02-19 06:59:30.697466 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e59f2d7123e8471ba930463cfd363772/work/artifacts"
2026-02-19 06:59:30.987553 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e59f2d7123e8471ba930463cfd363772/work/docs"
2026-02-19 06:59:31.014869 |
2026-02-19 06:59:31.015055 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-19 06:59:31.955988 | orchestrator | changed: .d..t...... ./
2026-02-19 06:59:31.956362 | orchestrator | changed: All items complete
2026-02-19 06:59:31.956422 |
2026-02-19 06:59:32.672455 | orchestrator | changed: .d..t...... ./
2026-02-19 06:59:33.387087 | orchestrator | changed: .d..t...... ./
2026-02-19 06:59:33.410304 |
2026-02-19 06:59:33.410433 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-19 06:59:33.443449 | orchestrator | skipping: Conditional result was False
2026-02-19 06:59:33.447117 | orchestrator | skipping: Conditional result was False
2026-02-19 06:59:33.467371 |
2026-02-19 06:59:33.467478 | PLAY RECAP
2026-02-19 06:59:33.467544 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-19 06:59:33.467579 |
2026-02-19 06:59:33.605634 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-19 06:59:33.608116 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-19 06:59:34.401057 |
2026-02-19 06:59:34.401249 | PLAY [Base post]
2026-02-19 06:59:34.420369 |
2026-02-19 06:59:34.420529 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-19 06:59:35.404370 | orchestrator | changed
2026-02-19 06:59:35.415531 |
2026-02-19 06:59:35.415817 | PLAY RECAP
2026-02-19 06:59:35.415999 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-19 06:59:35.416207 |
2026-02-19 06:59:35.546436 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-19 06:59:35.548928 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-19 06:59:36.338450 |
2026-02-19 06:59:36.338625 | PLAY [Base post-logs]
2026-02-19 06:59:36.349248 |
2026-02-19 06:59:36.349375 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-19 06:59:36.805498 | localhost | changed
2026-02-19 06:59:36.822589 |
2026-02-19 06:59:36.822785 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-19 06:59:36.861270 | localhost | ok
2026-02-19 06:59:36.868565 |
2026-02-19 06:59:36.868761 | TASK [Set zuul-log-path fact]
2026-02-19 06:59:36.888032 | localhost | ok
2026-02-19 06:59:36.903661 |
2026-02-19 06:59:36.903788 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-19 06:59:36.941558 | localhost | ok
2026-02-19 06:59:36.948764 |
2026-02-19 06:59:36.948912 | TASK [upload-logs : Create log directories]
2026-02-19 06:59:37.453129 | localhost | changed
2026-02-19 06:59:37.456000 |
2026-02-19 06:59:37.456116 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-19 06:59:37.967433 | localhost -> localhost | ok: Runtime: 0:00:00.007255
2026-02-19 06:59:37.976330 |
2026-02-19 06:59:37.976511 | TASK [upload-logs : Upload logs to log server]
2026-02-19 06:59:38.531312 | localhost | Output suppressed because no_log was given
2026-02-19 06:59:38.533307 |
2026-02-19 06:59:38.533413 | LOOP [upload-logs : Compress console log and json output]
2026-02-19 06:59:38.584755 | localhost | skipping: Conditional result was False
2026-02-19 06:59:38.589904 | localhost | skipping: Conditional result was False
2026-02-19 06:59:38.605323 |
2026-02-19 06:59:38.605567 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-19 06:59:38.653169 | localhost | skipping: Conditional result was False
2026-02-19 06:59:38.653778 |
2026-02-19 06:59:38.657408 | localhost | skipping: Conditional result was False
2026-02-19 06:59:38.670776 |
2026-02-19 06:59:38.671020 | LOOP [upload-logs : Upload console log and json output]
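The "Clean the cloud environment" tasks above tear resources down in dependency order: servers first, then the ports and volumes they used, then router interfaces before subnets and networks, and routers last. A small sketch (hypothetical ordering list, not the testbed script itself) capturing that invariant:

```python
# Hypothetical teardown order mirroring the cleanup log above: a resource
# is removed only after everything that depends on it is gone.
CLEANUP_ORDER = [
    "servers",
    "keypairs",
    "ports",
    "volumes",
    "disconnect routers",
    "subnets",
    "networks",
    "security groups",
    "floating ips",
    "routers",
]

def before(a, b):
    """True if resource kind `a` is cleaned up before kind `b`."""
    return CLEANUP_ORDER.index(a) < CLEANUP_ORDER.index(b)

# Servers must be gone before their ports and volumes can be deleted,
# and subnets must be detached from routers before networks can go.
assert before("servers", "ports")
assert before("disconnect routers", "subnets")
assert before("networks", "routers")
```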