A user with the vm-admin role can write arbitrary key-value pairs to VDI.sm_config and SR.sm_config via the XAPI management API. These fields store internal storage driver state - VHD chain metadata, GC control flags, encryption key hashes, and SR provisioning parameters. XAPI performs zero validation on written values and never notifies the storage backend of changes. The SM driver discovers the corruption only when it next reads sm_config during a routine operation and acts on the attacker's data as authoritative internal state. The hypervisor becomes a silent proxy, forwarding corrupted metadata to the storage subsystem through the trusted management channel. The resulting storage operations are indistinguishable from legitimate activity.
VDI.sm_config and SR.sm_config are Map(String, String) fields in the XAPI data model. SM (Storage Manager) drivers store their internal operational state in these fields: VHD parent-child chain metadata (vhd-parent), VDI type information (vdi_type), garbage collector control flags (paused, activating, relinking, vhd-blocks), encryption key hashes (key_hash), and SR provisioning parameters (allocation).
The write path is a pure database operation:
A vm-admin calls `VDI.add_to_sm_config(vdi, "vhd-parent", "<attacker-controlled-uuid>")`. The attack is entirely silent: no audit log entry, no XAPI event, no storage-side indicator. The write is a database operation that the system treats as routine configuration.
- VDI.add_to_sm_config() and VDI.set_sm_config() are pure database writes with zero dispatch to the storage backend
- FileSR.py, LVHDSR.py, and LinstorSR.py read sm_config values via SRCommand.parse() and use them directly in storage operations
- cleanup.py reads vhd-parent, paused, activating, relinking, and vhd-blocks to make coalesce and deletion decisions

Zero write-time validation. VDI.add_to_sm_config() and VDI.set_sm_config() perform pure database writes. No value is checked, no backend is consulted, no event is emitted.
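The write path's behavior can be illustrated with a minimal simulation (plain Python for illustration only; the real write path is OCaml in xapi_vdi.ml, and `FakeVDIRecord` is a hypothetical stand-in for a VDI database row): the write is a bare dictionary mutation that accepts any key-value pair, consults no backend, and emits no event.

```python
# Minimal simulation of the sm_config write path (illustrative sketch,
# not the actual XAPI code). It models the core problem: the write is a
# pure database mutation with no validation hook of any kind.

class FakeVDIRecord:
    """Hypothetical stand-in for a VDI row in the XAPI database."""
    def __init__(self):
        self.sm_config = {}

def add_to_sm_config(vdi, key, value):
    # No key allow-list, no value syntax check, no dispatch to the SM
    # backend, no audit event -- the value simply lands in the database.
    vdi.sm_config[key] = value

vdi = FakeVDIRecord()
add_to_sm_config(vdi, "vhd-parent", "attacker-chosen-uuid")
add_to_sm_config(vdi, "paused", "true")

# The storage driver later treats whatever is stored here as
# authoritative internal state.
print(vdi.sm_config)
```

The absence of any check between the API boundary and the database is the entire vulnerability surface: everything downstream trusts what lands here.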
RBAC gap. vm-admin has full write access to VDI.sm_config. The MRO guard in the xe CLI produces a warning but does not block the write. XenAPI clients (Xen Orchestra, CloudStack, custom scripts) bypass xe entirely.
Backend blind trust. SM drivers read sm_config values and use them directly in storage operations without independent validation against on-disk state. The auto-heal mechanism (_override_sm_config) only runs during SR scan - not on every read.
Protected-key persistence. Keys in smconfig_protected_keys (paused, activating, relinking, vhd-blocks) are never auto-healed. Once injected, they persist until manually removed.
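The protected-key gap can be sketched as follows (illustrative Python, not the actual SM code; `auto_heal` and `on_disk_truth` are hypothetical names for the `_override_sm_config` behavior described above): auto-heal rewrites driver-owned keys from on-disk truth, but keys in smconfig_protected_keys are skipped, so an injected value survives every scan.

```python
# Sketch of the auto-heal gap (hypothetical simplified logic). Keys in
# the protected set are never overwritten from on-disk state, so an
# attacker-injected value persists across every SR scan.

SMCONFIG_PROTECTED_KEYS = {"paused", "activating", "relinking", "vhd-blocks"}

def auto_heal(sm_config, on_disk_truth):
    """Overwrite healable keys from on-disk state; protected keys persist."""
    for key, true_value in on_disk_truth.items():
        if key not in SMCONFIG_PROTECTED_KEYS:
            sm_config[key] = true_value
    return sm_config

# Attacker injected both a fake parent and a pause flag:
sm_config = {"vhd-parent": "attacker-uuid", "paused": "true"}
healed = auto_heal(sm_config, {"vhd-parent": "real-parent-uuid"})

# vhd-parent is repaired on scan, but the injected paused=true remains.
print(healed)
```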
| Driver | SR Types | Critical Keys | Impact |
|---|---|---|---|
| LVHDSR | lvmoiscsi, lvmohba, lvmofcoe, lvm | vdi_type, vhd-parent, paused, host_* | KeyError crash (entire SR offline), GC misdirection, chain severance |
| FileSR | ext, nfs, cephfs, glusterfs, smb | vhd-parent, key_hash, type | Chain severance, encryption lockout, I/O path confusion |
| LinstorSR | linstor | vdi_type, host_* | KeyError crash, split-brain across DRBD nodes |
| RawISCSISR | rawhba | SCSIid, LUNid | Cross-VM data access via LUN redirection |
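The KeyError crash mode in the table can be reproduced in miniature: a driver that indexes sm_config directly, rather than using a guarded lookup with a fallback, raises on a wiped key, and the unhandled exception aborts the whole SR operation. This is a hypothetical simplified sketch, not the actual LVHDSR code:

```python
# Simplified sketch of the KeyError crash mode (hypothetical, not actual
# driver code): direct indexing of a driver-internal key raises when an
# attacker has deleted the key, aborting the enclosing SR operation.

def load_vdi(sm_config):
    # Trusts that vdi_type is always present and valid.
    vdi_type = sm_config["vdi_type"]   # raises KeyError if key was wiped
    return f"opening VDI as {vdi_type}"

# Normal case:
print(load_vdi({"vdi_type": "vhd"}))

# Attacker wiped the key via remove_from_sm_config:
try:
    load_vdi({})
except KeyError as exc:
    print(f"SR operation aborts, SR goes offline: missing key {exc}")
```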
Seven scenarios are confirmed with live evidence across NFS, iSCSI, and EXT SR types. Each scenario has a dedicated PoC script and committed evidence logs.
| Scenario | Impact | Status |
|---|---|---|
| S2: VHD chain severance | Break snapshot chains; GC deletes orphaned VHDs. Data loss. | ALL PASS (NFS, iSCSI, EXT) |
| S3: VDI type confusion | Wipe vdi_type key; KeyError crash takes entire SR offline for all VMs | ALL PASS (iSCSI) |
| S4: GC misdirection | Poison vhd-parent to point at different VDI; GC deletes or coalesces wrong data | ALL PASS (NFS, iSCSI, EXT) |
| S8: SR config corruption | Replace SR-level sm_config; driver crashes on next VDI creation. No auto-heal for SR metadata. | ALL PASS (iSCSI) |
| S9: Pause state injection | Inject paused=true; GC skips VDI indefinitely. Key persists forever. | ALL PASS (NFS, iSCSI) |
| S10: GC state injection | Inject GC-internal keys (activating, relinking); GC enters spin-wait loops, storage saturates | ALL PASS (NFS, iSCSI) |
| S11: Type race window | Exploit _determineType() race between sr-scan auto-heal and next read | ALL PASS (iSCSI) |
On iSCSI/LVM-backed SRs, the attack is undetectable after the fact. The vhd-util query tool fails on snapshot-created LVs because the LVs cannot be independently activated. Defenders cannot compare XAPI metadata against on-disk VHD headers, cannot confirm whether vhd-parent values are correct, and cannot assess blast radius after a suspected compromise.
Detection indicators:

- sm_config writes by non-SM sessions for driver-internal keys
- vdi_type values not in {aVHD, lVHD, raw}
- vhd-parent UUID mismatches against on-disk VHD headers (NFS/EXT SRs only - not possible on iSCSI)
- paused=true on VDIs with no active coalesce operation
- activating or relinking keys present outside GC cycles

Detailed detection guidance for vendors: disclosure/vendor-detection-guidance.md

Operational mitigations:

- Restrict vm-admin role grants to trusted personnel only
- Monitor sm_config writes via the XAPI event stream (requires custom tooling)
- Audit sm_config on all VDIs for unexpected keys

Layer 1 - Key Protection: Reject writes to known driver-internal keys (vhd-parent, vdi_type, paused, activating, relinking, vhd-blocks, key_hash, host_*, SCSIid, LUNid, allocation) unless the caller is an SM driver internal session. Implementation location: ocaml/xapi/xapi_vdi.ml.
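The Layer 1 check can be sketched as follows. The real fix belongs in OCaml in xapi_vdi.ml; this Python version only illustrates the logic. The key list is taken from the advisory; `is_sm_internal_session` is a hypothetical predicate for "caller is an SM driver internal session".

```python
# Sketch of the Layer 1 write-time key protection (illustrative only;
# the real implementation would be OCaml in xapi_vdi.ml). Rejects
# driver-internal keys unless the caller is an SM-internal session.
import fnmatch

PROTECTED_KEY_PATTERNS = [
    "vhd-parent", "vdi_type", "paused", "activating", "relinking",
    "vhd-blocks", "key_hash", "host_*", "SCSIid", "LUNid", "allocation",
]

def check_sm_config_write(key, is_sm_internal_session):
    """Reject writes to driver-internal keys from ordinary API sessions."""
    if is_sm_internal_session:
        return True  # SM drivers keep full access to their own state
    if any(fnmatch.fnmatch(key, pat) for pat in PROTECTED_KEY_PATTERNS):
        raise PermissionError(f"sm_config key {key!r} is driver-internal")
    return True

check_sm_config_write("my-app-tag", is_sm_internal_session=False)  # allowed
check_sm_config_write("vhd-parent", is_sm_internal_session=True)   # allowed
try:
    check_sm_config_write("host_abc", is_sm_internal_session=False)
except PermissionError as exc:
    print(exc)  # rejected: host_* matches a protected pattern
```

Glob matching handles the wildcard entries (host_*) without enumerating per-host keys.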
Layer 2 - RBAC Tightening: Restrict VDI.add_to_sm_config, VDI.set_sm_config, and equivalents to pool-admin or a new storage-admin role.
Layer 3 - Driver-Side Validation: Each SM driver should validate sm_config values against on-disk state before using them, on every read - not just on SR scan.
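Layer 3 can be sketched as a read-time cross-check: before acting on a stored vhd-parent, the driver re-reads the parent locator from the VHD header on disk and refuses to proceed on mismatch. In this hypothetical sketch, `read_on_disk_parent` stands in for whatever on-disk query the driver would use (e.g. a vhd-util parent lookup); it is not actual SM driver code.

```python
# Sketch of Layer 3 read-time validation (hypothetical code, not an
# actual SM driver). read_on_disk_parent stands in for reading the
# parent locator from the VHD header on disk.

def validated_parent(sm_config, read_on_disk_parent):
    stored = sm_config.get("vhd-parent")
    on_disk = read_on_disk_parent()
    if stored != on_disk:
        # Database metadata disagrees with on-disk truth: refuse to
        # coalesce or delete based on a potentially poisoned value.
        raise ValueError(
            f"sm_config vhd-parent {stored!r} != on-disk parent {on_disk!r}"
        )
    return stored

# Legitimate state passes:
validated_parent({"vhd-parent": "uuid-A"}, lambda: "uuid-A")

# Injected state is caught before the GC acts on it:
try:
    validated_parent({"vhd-parent": "uuid-EVIL"}, lambda: "uuid-A")
except ValueError as exc:
    print(exc)
```

Note the advisory's caveat applies: on iSCSI/LVM-backed SRs the on-disk header may not be independently readable, which is exactly why this check cannot currently be retrofitted everywhere.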
Layer 4 - Audit Logging: Log all sm_config write operations to the XAPI audit trail with full session identity.
Upstream patches exist. They are held privately pending coordinated disclosure.
Disclosure:
- Affected code: xapi_vdi.ml (write path); FileSR.py, LVHDSR.py, LinstorSR.py, cleanup.py (consumers)
- Advisory: disclosure/advisories/smc-1-security-advisory.md
- Technical writeup: research/smc-1/xapi-sm-config-vulnerability.md
- PoC scripts: research/smc-1/poc/ (available to CSIRTs on request)
- Evidence logs: research/smc-1/poc/evidence/

Discovered and reported by Jakob Wolffhechel, Moksha.