Flush command for multipath failed with the message “map in use”

The problem occurred on a cloud deployed on Astra Linux 1.7, with Cinder backed by a storage system connected via multipath.

When performing certain actions on an instance that has attached volumes, the action ends with an error and the instance goes into the “Error” state.

An error message containing the following text appears in the description of the virtual machine and in the nova-compute log on the compute node:

Unexpected error while running command.
Command: multipath -f 36fc73fb10023e21a5cf7c154000001af
Exit code: 1
Stdout: ''
Stderr: '586.802730 | 36fc73fb10023e21a5cf7c154000001af-part2: map in use\n'

As this message shows, the command to flush the multipath connections to a network block device failed because the device was busy.

In this case, the device partitions are not mounted in the operating system of the compute node and there are no open files on the device path. A likely cause is that the network block devices (which are virtual machine disks) contain LVM metadata. The operating system of the compute node scans the block devices attached to it for this metadata and activates the LVs it finds through the device mapper.

You can verify this by looking at the output of the pvdisplay command. Disk partitions whose connections could not be flushed will be listed as PVs.
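For illustration, the relevant part of the pvdisplay output might look like this, using the device and VG names from the example below (the remaining fields are trimmed):

pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/36fc73fb10023e21a5cf7c154000001af-part5
  VG Name               volgroup-one
  ...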

To “free” the partitions, run the dmsetup remove command for each LV in the VG to which the discovered PVs belong.

Run the lvs command to view the list of LVs. The device name in the device mapper consists of the VG name and the LV name separated by a hyphen (hyphens inside the names themselves are doubled).

Example:

In the pvdisplay output, we found that the instance block device partition /dev/mapper/36fc73fb10023e21a5cf7c154000001af-part5 is a PV belonging to the VG volgroup-one. Next, the output of the lvs command shows that the VG volgroup-one contains the LV logicalvol-root. We remove it from the device mapper:

dmsetup remove /dev/mapper/volgroup--one-logicalvol--root
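
If the VG contains several LVs, each of them has to be removed. As a sketch (assuming the VG name volgroup-one from the example above), the lv_dm_path reporting field of lvs can be used to obtain the device-mapper paths directly and remove them in a loop:

# list the device-mapper paths of all LVs in the VG and remove each map
for dm in $(lvs --noheadings -o lv_dm_path volgroup-one); do
    dmsetup remove "$dm"
done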

After this, the command multipath -f 36fc73fb10023e21a5cf7c154000001af should complete successfully.

To restore the instance to operation after removing the LVs from the device mapper, reset its state to “Active” and perform a “Hard” reboot.
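
For example, this can be done with the OpenStack CLI (the instance UUID below is a placeholder):

# reset the instance state to active, then hard-reboot it
openstack server set --state active <instance-uuid>
openstack server reboot --hard <instance-uuid>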

To prevent the error from recurring, configure LVM device filters on the compute node so that it does not scan the block devices that belong to instances.
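
A minimal sketch of such a filter in /etc/lvm/lvm.conf, assuming the node’s local disk is /dev/sda (adjust the accept pattern to the actual local devices; everything else is rejected so LVM ignores attached Cinder/multipath volumes):

devices {
    # accept only the local system disk, reject all other block devices
    global_filter = [ "a|^/dev/sda|", "r|.*|" ]
}

Depending on the setup, the filter may also need to be included in the initramfs (for example, via update-initramfs -u) so that it takes effect at boot.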