This CVE ID has been rejected or withdrawn by its CVE Numbering Authority.
History

Thu, 12 Dec 2024 16:45:00 +0000


Thu, 12 Dec 2024 15:45:00 +0000

Type Values Removed Values Added
Description: In the Linux kernel, the following vulnerability has been resolved:

nvme: make keep-alive synchronous operation

The nvme keep-alive operation, which executes at a periodic interval, could potentially sneak in while a fabric controller is shutting down. This may lead to a race between the fabric controller's admin queue destroy code path (invoked while shutting down the controller) and the hw/hctx queue dispatcher called from the nvme keep-alive async request queuing operation. This race can lead to the kernel crash shown below:

Call Trace:
  autoremove_wake_function+0x0/0xbc (unreliable)
  __blk_mq_sched_dispatch_requests+0x114/0x24c
  blk_mq_sched_dispatch_requests+0x44/0x84
  blk_mq_run_hw_queue+0x140/0x220
  nvme_keep_alive_work+0xc8/0x19c [nvme_core]
  process_one_work+0x200/0x4e0
  worker_thread+0x340/0x504
  kthread+0x138/0x140
  start_kernel_thread+0x14/0x18

If an nvme keep-alive request sneaks in while the fabric controller is shutting down, it is flushed off. The nvme_keep_alive_end_io function is then invoked to handle the end of the keep-alive operation; it decrements admin->q_usage_counter, and if this was the last/only request in the admin queue, admin->q_usage_counter drops to zero. If that happens, the blk-mq destroy queue operation (blk_mq_destroy_queue()), which could be running simultaneously on another CPU (as this is the controller shutdown code path), makes forward progress and deletes the admin queue. From that point onward the admin queue resources must not be accessed. The issue, however, is that the nvme keep-alive thread running the hw/hctx queue dispatch operation has not yet finished its work, so it can still access the admin queue resources after the admin queue has already been deleted, which causes the crash above.

This fix avoids the observed crash by implementing keep-alive as a synchronous operation, so that admin->q_usage_counter is decremented only after the keep-alive command finishes execution and its status is returned to the caller (blk_execute_rq()). This ensures that the fabric shutdown code path does not destroy the fabric admin queue until the keep-alive request has finished execution and the keep-alive thread is no longer running the hw/hctx queue dispatch operation.

This CVE ID has been rejected or withdrawn by its CVE Numbering Authority.
Title
  Values Removed: nvme: make keep-alive synchronous operation
  Values Added: kernel: nvme: make keep-alive synchronous operation
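The crash above comes from the pre-fix dispatch pattern: the keep-alive work item queued its admin command asynchronously and returned immediately, and the completion callback dropped admin->q_usage_counter before the hw/hctx dispatch path had fully unwound. The following is a minimal kernel-style sketch of that racy pattern, not the verbatim pre-fix source; names such as nvme_keep_alive_work, ka_cmd and ka_work follow the nvme core driver, while the bodies are simplified and error handling is elided.

/*
 * Illustrative sketch of the pre-fix (racy) asynchronous keep-alive
 * submission. Simplified; not the exact upstream code.
 */
static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
						 blk_status_t status)
{
	/* rq->end_io_data carries the nvme_ctrl; status handling elided. */

	/*
	 * Completing the request here drops admin->q_usage_counter. If
	 * this was the only admin request, a concurrent shutdown blocked
	 * in blk_mq_destroy_queue() can now free the admin queue -- while
	 * the hw/hctx dispatch that issued this request is still running
	 * and may still touch queue resources. That is the race window.
	 */
	return RQ_END_IO_FREE;
}

static void nvme_keep_alive_work(struct work_struct *work)
{
	struct nvme_ctrl *ctrl = container_of(to_delayed_work(work),
					      struct nvme_ctrl, ka_work);
	struct request *rq;

	rq = blk_mq_alloc_request(ctrl->admin_q, nvme_req_op(&ctrl->ka_cmd),
				  BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
	if (IS_ERR(rq))
		return;
	nvme_init_request(rq, &ctrl->ka_cmd);

	rq->timeout = ctrl->kato * HZ;
	rq->end_io = nvme_keep_alive_end_io;
	rq->end_io_data = ctrl;
	blk_execute_rq_nowait(rq, false);	/* async: returns right away */
}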

Wed, 04 Dec 2024 02:15:00 +0000

Type Values Removed Values Added
Weaknesses: CWE-362
References
Metrics
  Values Removed: threat_severity: None
  Values Added: cvssV3_1: {'score': 4.7, 'vector': 'CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:N/I:N/A:H'}; threat_severity: Moderate


Mon, 25 Nov 2024 21:30:00 +0000

Type Values Removed Values Added
Description: In the Linux kernel, the following vulnerability has been resolved:

nvme: make keep-alive synchronous operation

The nvme keep-alive operation, which executes at a periodic interval, could potentially sneak in while a fabric controller is shutting down. This may lead to a race between the fabric controller's admin queue destroy code path (invoked while shutting down the controller) and the hw/hctx queue dispatcher called from the nvme keep-alive async request queuing operation. This race can lead to the kernel crash shown below:

Call Trace:
  autoremove_wake_function+0x0/0xbc (unreliable)
  __blk_mq_sched_dispatch_requests+0x114/0x24c
  blk_mq_sched_dispatch_requests+0x44/0x84
  blk_mq_run_hw_queue+0x140/0x220
  nvme_keep_alive_work+0xc8/0x19c [nvme_core]
  process_one_work+0x200/0x4e0
  worker_thread+0x340/0x504
  kthread+0x138/0x140
  start_kernel_thread+0x14/0x18

If an nvme keep-alive request sneaks in while the fabric controller is shutting down, it is flushed off. The nvme_keep_alive_end_io function is then invoked to handle the end of the keep-alive operation; it decrements admin->q_usage_counter, and if this was the last/only request in the admin queue, admin->q_usage_counter drops to zero. If that happens, the blk-mq destroy queue operation (blk_mq_destroy_queue()), which could be running simultaneously on another CPU (as this is the controller shutdown code path), makes forward progress and deletes the admin queue. From that point onward the admin queue resources must not be accessed. The issue, however, is that the nvme keep-alive thread running the hw/hctx queue dispatch operation has not yet finished its work, so it can still access the admin queue resources after the admin queue has already been deleted, which causes the crash above.

This fix avoids the observed crash by implementing keep-alive as a synchronous operation, so that admin->q_usage_counter is decremented only after the keep-alive command finishes execution and its status is returned to the caller (blk_execute_rq()). This ensures that the fabric shutdown code path does not destroy the fabric admin queue until the keep-alive request has finished execution and the keep-alive thread is no longer running the hw/hctx queue dispatch operation.
Title: nvme: make keep-alive synchronous operation
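The fix described above replaces that asynchronous submission with a synchronous one. A companion sketch, under the same caveats (driver-style naming, simplified error and status handling, not the exact patch):

/*
 * Illustrative sketch of the post-fix synchronous keep-alive
 * submission. Simplified; not the exact upstream code.
 */
static void nvme_keep_alive_work(struct work_struct *work)
{
	struct nvme_ctrl *ctrl = container_of(to_delayed_work(work),
					      struct nvme_ctrl, ka_work);
	struct request *rq;
	blk_status_t status;

	rq = blk_mq_alloc_request(ctrl->admin_q, nvme_req_op(&ctrl->ka_cmd),
				  BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
	if (IS_ERR(rq))
		return;
	nvme_init_request(rq, &ctrl->ka_cmd);
	rq->timeout = ctrl->kato * HZ;

	/*
	 * blk_execute_rq() does not return until the command completes,
	 * so admin->q_usage_counter stays elevated across the whole
	 * dispatch. blk_mq_destroy_queue() on the shutdown path therefore
	 * waits until this work item is entirely done with the queue.
	 */
	status = blk_execute_rq(rq, false);
	blk_mq_free_request(rq);

	/* ... act on status: handle errors, reschedule ka_work ... */
}

The trade-off is that the keep-alive worker now blocks for the duration of the command, which is acceptable for a periodic, low-frequency housekeeping operation.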
References

MITRE

Status: REJECTED

Assigner: Linux

Published: 2024-11-25T21:21:29.297Z

Updated: 2024-12-12T15:20:41.007Z

Reserved: 2024-11-19T17:17:24.984Z

Link: CVE-2024-53102

Vulnrichment

No data.

NVD

Status: Rejected

Published: 2024-11-25T22:15:17.553

Modified: 2024-12-12T16:15:54.967

Link: CVE-2024-53102

Redhat

Severity: Moderate

Published Date: 2024-11-25T00:00:00Z

Links: CVE-2024-53102 - Bugzilla