Filtered by vendor: Linux
Filtered by product: Linux Kernel
Total: 12850 CVEs
CVE Vendors Products Updated CVSS v3.1
CVE-2022-49121 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: scsi: pm8001: Fix tag leaks on error In pm8001_chip_set_dev_state_req(), pm8001_chip_fw_flash_update_req(), pm80xx_chip_phy_ctl_req() and pm8001_chip_reg_dev_req(), add missing calls to pm8001_tag_free() to free the allocated tag when pm8001_mpi_build_cmd() fails. Similarly, in pm8001_exec_internal_task_abort(), if the chip ->task_abort method fails, the tag allocated for the abort request task must be freed. Add the missing call to pm8001_tag_free().
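The pattern behind this fix is general: whenever a tag (or any ID) has been allocated and a later step fails, the error path must release it. A minimal standalone C sketch of the idea, using hypothetical alloc_tag()/free_tag()/build_cmd() helpers rather than the real pm8001 functions:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for pm8001_tag_alloc()/pm8001_tag_free(). */
static bool alloc_tag(unsigned int *tag) { *tag = 42; return true; }
static void free_tag(unsigned int tag)   { printf("freed tag %u\n", tag); }

/* Stand-in for pm8001_mpi_build_cmd(); pretend it can fail. */
static int build_cmd(unsigned int tag) { (void)tag; return -1; }

static int send_request(void)
{
    unsigned int tag;
    int rc;

    if (!alloc_tag(&tag))
        return -1;

    rc = build_cmd(tag);
    if (rc) {
        /* The fix: don't leak the tag when the command can't be built. */
        free_tag(tag);
        return rc;
    }
    return 0;
}

int main(void)
{
    send_request();
    return 0;
}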
CVE-2023-53019 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: net: mdio: validate parameter addr in mdiobus_get_phy() The caller may pass any value as addr, which may result in an out-of-bounds access to the array mdio_map. One existing case is stmmac_init_phy(), which may pass -1 as addr. Therefore validate addr before using it.
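The fix described is a plain bounds check before indexing the lookup table. A hedged sketch of the pattern, with a hypothetical get_phy() standing in for mdiobus_get_phy() (only the PHY_MAX_ADDR limit is borrowed from the kernel; the rest is illustrative):

#include <stddef.h>
#include <stdio.h>

#define PHY_MAX_ADDR 32  /* the kernel's limit for MDIO bus addresses */

static void *mdio_map[PHY_MAX_ADDR];

/* Reject out-of-range addresses (e.g. the -1 that stmmac_init_phy() may
 * pass) before indexing the array. */
static void *get_phy(int addr)
{
    if (addr < 0 || addr >= PHY_MAX_ADDR)
        return NULL;
    return mdio_map[addr];
}

int main(void)
{
    /* Safely returns NULL instead of reading out of bounds. */
    printf("%p\n", get_phy(-1));
    return 0;
}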
CVE-2024-35793 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: debugfs: fix wait/cancellation handling during remove Ben Greear further reports deadlocks during concurrent debugfs remove while files are being accessed, even though the code in question now uses debugfs cancellations. It turns out that, despite all the review of the locking, we completely missed that the logic is wrong: if the refcount hits zero we can finish (and need not wait for the completion), but if it doesn't we have to trigger all the cancellations. As written, we can _never_ get into the loop triggering the cancellations. Fix this, and explain it better while at it.
CVE-2024-56552 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: drm/xe/guc_submit: fix race around suspend_pending Currently, in some test cases we can trigger:

xe 0000:03:00.0: [drm] Assertion `exec_queue_destroyed(q)` failed!
....
WARNING: CPU: 18 PID: 2640 at drivers/gpu/drm/xe/xe_guc_submit.c:1826 xe_guc_sched_done_handler+0xa54/0xef0 [xe]
xe 0000:03:00.0: [drm] *ERROR* GT1: DEREGISTER_DONE: Unexpected engine state 0x00a1, guc_id=57

Looking at a snippet of the corresponding ftrace for this GuC id we can see:

162.673311: xe_sched_msg_add: dev=0000:03:00.0, gt=1 guc_id=57, opcode=3
162.673317: xe_sched_msg_recv: dev=0000:03:00.0, gt=1 guc_id=57, opcode=3
162.673319: xe_exec_queue_scheduling_disable: dev=0000:03:00.0, 1:0x2, gt=1, width=1, guc_id=57, guc_state=0x29, flags=0x0
162.674089: xe_exec_queue_kill: dev=0000:03:00.0, 1:0x2, gt=1, width=1, guc_id=57, guc_state=0x29, flags=0x0
162.674108: xe_exec_queue_close: dev=0000:03:00.0, 1:0x2, gt=1, width=1, guc_id=57, guc_state=0xa9, flags=0x0
162.674488: xe_exec_queue_scheduling_done: dev=0000:03:00.0, 1:0x2, gt=1, width=1, guc_id=57, guc_state=0xa9, flags=0x0
162.678452: xe_exec_queue_deregister: dev=0000:03:00.0, 1:0x2, gt=1, width=1, guc_id=57, guc_state=0xa1, flags=0x0

It looks like we try to suspend the queue (opcode=3), setting suspend_pending and triggering a disable_scheduling. The user then closes the queue. However, the close will also forcefully signal the suspend fence after killing the queue; suspend_pending is cleared when the suspend fence is signalled, so later, when the G2H response for disable_scheduling comes back, the handler now incorrectly tries to also deregister the queue. This leads to warnings since the queue has yet to even be marked for destruction. We also seem to trigger errors later when trying to double unregister the same queue. To fix this, tweak the ordering when handling the response to ensure we don't race with a disable_scheduling that didn't actually intend to perform an unregister. The destruction path should now also correctly wait for any pending_disable before marking the queue as destroyed. (cherry picked from commit f161809b362f027b6d72bd998e47f8f0bad60a2e)
CVE-2024-42130 1 Linux 1 Linux Kernel 2025-07-13 5.6 Medium
In the Linux kernel, the following vulnerability has been resolved: nfc/nci: Add the inconsistency check between the input data length and count

write$nci(r0, &(0x7f0000000740)=ANY=[@ANYBLOB="610501"], 0xf)

Syzbot constructed a write() call with a data length of 3 bytes but a count value of 15, which passed too little data to meet the basic requirements of the function nci_rf_intf_activated_ntf_packet(). Therefore, add a consistency check between the data length and the count value to avoid problems caused by a mismatch between them.
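The check being added is a routine one: never trust a caller-supplied count that claims more data than was actually provided. A small illustrative sketch, with made-up names rather than the real NCI parsing code:

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

/* Parse a buffer whose header claims `claimed_count` bytes of payload.
 * `data_len` is how many bytes the caller really passed in. */
static int parse_packet(const unsigned char *data, size_t data_len,
                        size_t claimed_count)
{
    if (claimed_count > data_len)   /* inconsistent: claim exceeds reality */
        return -EINVAL;

    for (size_t i = 0; i < claimed_count; i++)
        printf("%02x ", data[i]);   /* safe: never reads past data_len */
    printf("\n");
    return 0;
}

int main(void)
{
    unsigned char buf[3] = { 0x61, 0x05, 0x01 };
    /* A 3-byte buffer with a claimed count of 15 is rejected with -EINVAL. */
    int rc = parse_packet(buf, sizeof(buf), 15);
    printf("rc=%d\n", rc);
    return 0;
}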
CVE-2023-52654 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: io_uring/af_unix: disable sending io_uring over sockets File reference cycles have caused lots of problems for io_uring in the past, and it still doesn't work exactly right and races with unix_stream_read_generic(). The safest fix would be to completely disallow sending io_uring files via SCM_RIGHTS over sockets, so there are no possible cycles involving registered files, thus rendering SCM accounting on the io_uring side unnecessary.
CVE-2024-41050 1 Linux 1 Linux Kernel 2025-07-13 4.7 Medium
In the Linux kernel, the following vulnerability has been resolved: cachefiles: cyclic allocation of msg_id to avoid reuse Reusing the msg_id after a maliciously completed reopen request may cause a read request to remain unprocessed and result in a hung task, as shown in the following interleaving of threads t1, t2 and t3:

cachefiles_ondemand_select_req
 cachefiles_ondemand_object_is_close(A)
 cachefiles_ondemand_set_object_reopening(A)
 queue_work(fscache_object_wq, &info->work)
ondemand_object_worker
 cachefiles_ondemand_init_object(A)
  cachefiles_ondemand_send_req(OPEN)
   // get msg_id 6
   wait_for_completion(&req_A->done)
cachefiles_ondemand_daemon_read
 // read msg_id 6 req_A
 cachefiles_ondemand_get_fd
 copy_to_user
// Malicious completion msg_id 6
copen 6,-1
cachefiles_ondemand_copen
 complete(&req_A->done)
 // will not set the object to close
 // because ondemand_id && fd is valid.
// ondemand_object_worker() is done
// but the object is still reopening.
// new open req_B
cachefiles_ondemand_init_object(B)
 cachefiles_ondemand_send_req(OPEN)
 // reuse msg_id 6
process_open_req
 copen 6,A.size
// The expected failed copen was executed successfully

Expect copen to fail, and when it does, it closes fd, which sets the object to close, and then close triggers reopen again. However, due to msg_id reuse resulting in a successful copen, the anonymous fd is not closed until the daemon exits. Therefore read requests waiting for reopen to complete may trigger a hung task. To avoid this issue, allocate the msg_id cyclically so that a msg_id is not reused within a very short span of time.
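Cyclic ID allocation is the generic remedy: rather than always handing out the lowest free ID (which lets a just-freed ID be reused immediately), the allocator continues from the last ID it issued and wraps around. A standalone sketch of that policy, not the actual cachefiles allocator:

#include <stdbool.h>
#include <stdio.h>

#define MAX_IDS 8

static bool in_use[MAX_IDS];
static unsigned int next_id;   /* cursor for cyclic allocation */

/* Return a free ID, starting the search after the last one handed out,
 * so a recently freed ID is not reused until the space wraps around. */
static int alloc_id_cyclic(void)
{
    for (unsigned int i = 0; i < MAX_IDS; i++) {
        unsigned int id = (next_id + i) % MAX_IDS;

        if (!in_use[id]) {
            in_use[id] = true;
            next_id = id + 1;
            return (int)id;
        }
    }
    return -1;  /* all IDs busy */
}

static void free_id(int id) { in_use[id] = false; }

int main(void)
{
    int a = alloc_id_cyclic();   /* 0 */
    free_id(a);
    int b = alloc_id_cyclic();   /* 1, not 0: the freed ID is not reused at once */
    printf("a=%d b=%d\n", a, b);
    return 0;
}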
CVE-2022-49133 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: drm/amdkfd: svm range restore work deadlock when process exit kfd_process_notifier_release flushes svm_range_restore_work, which calls svm_range_list_lock_and_flush_work to flush the deferred_list work; but if the deferred_list work's mmput releases the last user, it will call exit_mmap -> notifier_release, which deadlocks with the backtrace below. Move the flush of svm_range_restore_work to kfd_process_wq_release to avoid the deadlock. Then have svm_range_restore_work take a task->mm reference to avoid the mm going away while validating and mapping ranges to the GPU.

Workqueue: events svm_range_deferred_list_work [amdgpu]
Call Trace:
 wait_for_completion+0x94/0x100
 __flush_work+0x12a/0x1e0
 __cancel_work_timer+0x10e/0x190
 cancel_delayed_work_sync+0x13/0x20
 kfd_process_notifier_release+0x98/0x2a0 [amdgpu]
 __mmu_notifier_release+0x74/0x1f0
 exit_mmap+0x170/0x200
 mmput+0x5d/0x130
 svm_range_deferred_list_work+0x104/0x230 [amdgpu]
 process_one_work+0x220/0x3c0
CVE-2024-53148 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: comedi: Flush partial mappings in error case If some remap_pfn_range() calls succeeded before one failed, we still have buffer pages mapped into the userspace page tables when we drop the buffer reference with comedi_buf_map_put(bm). The userspace mappings are only cleaned up later in the mmap error path. Fix it by explicitly flushing all mappings in our VMA on the error path. See commit 79a61cc3fc04 ("mm: avoid leaving partial pfn mappings around in error case").
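The shape of the fix is the classic partial-rollback pattern: if step i of n fails, undo steps 0..i-1 before returning. A generic sketch under that assumption (map_page()/unmap_page() are hypothetical helpers, not the comedi or mm APIs):

#include <stdio.h>

#define NPAGES 4

/* Hypothetical stand-ins for remap_pfn_range() and the corresponding
 * teardown (e.g. zapping the already-mapped part of the VMA). */
static int  map_page(int i)   { return i == 2 ? -1 : 0; }  /* fail on page 2 */
static void unmap_page(int i) { printf("unmapped page %d\n", i); }

static int map_buffer(void)
{
    int i, err = 0;

    for (i = 0; i < NPAGES; i++) {
        err = map_page(i);
        if (err)
            break;
    }
    if (err) {
        /* Error path: tear down whatever was already mapped so no
         * partial mappings are left behind. */
        while (i-- > 0)
            unmap_page(i);
    }
    return err;
}

int main(void)
{
    return map_buffer() ? 1 : 0;
}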
CVE-2024-57928 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: netfs: Fix enomem handling in buffered reads If netfs_read_to_pagecache() gets an error from either ->prepare_read() or from netfs_prepare_read_iterator(), it needs to decrement ->nr_outstanding, cancel the subrequest and break out of the issuing loop. Currently, it only does this for two of the cases, but there are two more that aren't handled. Fix this by moving the handling to a common place and jumping to it from all four places. This is in preference to inserting a wrapper around netfs_prepare_read_iterator() as proposed by Dmitry Antipov[1].
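Consolidating several failure sites into one error path is the usual goto-cleanup idiom in kernel-style C. A condensed, self-contained sketch of that structure with invented names (not the netfs functions):

#include <stdio.h>

struct subreq { int id; };

/* Hypothetical stand-ins for ->prepare_read() and the iterator setup. */
static int prepare_read(struct subreq *s)     { return s->id == 3 ? -12 : 0; }
static int prepare_iterator(struct subreq *s) { (void)s; return 0; }

static void cancel_subreq(struct subreq *s)   { printf("cancel %d\n", s->id); }

static int issue_reads(struct subreq *reqs, int n, int *nr_outstanding)
{
    int err = 0;

    for (int i = 0; i < n; i++) {
        struct subreq *s = &reqs[i];

        (*nr_outstanding)++;

        err = prepare_read(s);
        if (err)
            goto abandon;          /* one common error path... */

        err = prepare_iterator(s);
        if (err)
            goto abandon;          /* ...jumped to from every failure site */

        continue;

abandon:
        /* Common handling: undo the accounting, cancel, stop issuing. */
        (*nr_outstanding)--;
        cancel_subreq(s);
        break;
    }
    return err;
}

int main(void)
{
    struct subreq reqs[4] = { {0}, {1}, {2}, {3} };
    int outstanding = 0;

    issue_reads(reqs, 4, &outstanding);
    printf("outstanding=%d\n", outstanding);
    return 0;
}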
CVE-2022-49363 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: f2fs: fix to do sanity check on block address in f2fs_do_zero_range() As Yanming reported in bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215894 "I have encountered a bug in the F2FS file system in kernel v5.17. I have uploaded the system call sequence as case.c, and a fuzzed image can be found on Google net disk. The kernel should enable CONFIG_KASAN=y and CONFIG_KASAN_INLINE=y. You can reproduce the bug by running the following commands:"

kernel BUG at fs/f2fs/segment.c:2291!
Call Trace:
 f2fs_invalidate_blocks+0x193/0x2d0
 f2fs_fallocate+0x2593/0x4a70
 vfs_fallocate+0x2a5/0xac0
 ksys_fallocate+0x35/0x70
 __x64_sys_fallocate+0x8e/0xf0
 do_syscall_64+0x3b/0x90
 entry_SYSCALL_64_after_hwframe+0x44/0xae

The root cause is that, after the image was fuzzed, the block mapping info in the inode is inconsistent with the SIT table, so f2fs_fallocate() panics when updating the SIT with an invalid blkaddr. Let's fix the issue by adding a sanity check on the block address before updating the SIT table with it.
CVE-2022-49112 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: mt76: fix monitor mode crash with sdio driver The mt7921s driver may receive frames with fragmented buffers. If a CTS packet is received in monitor mode, the payload is only 10 bytes and needs 6 bytes of header padding after the RXD buffer. However, only the RXD is in the first linear buffer; if we pull RXD-size+6 bytes with skb_pull(), that triggers "BUG_ON(skb->len < skb->data_len)" in __skb_pull(). To avoid the nonlinear buffer issue, enlarge the RXD size from 128 to 256 to make sure all MCU operations are handled in the linear buffer.

[ 52.007562] kernel BUG at include/linux/skbuff.h:2313!
[ 52.007578] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[ 52.007987] pc : skb_pull+0x48/0x4c
[ 52.008015] lr : mt7921_queue_rx_skb+0x494/0x890 [mt7921_common]
[ 52.008361] Call trace:
[ 52.008377]  skb_pull+0x48/0x4c
[ 52.008400]  mt76s_net_worker+0x134/0x1b0 [mt76_sdio 35339a92c6eb7d4bbcc806a1d22f56365565135c]
[ 52.008431]  __mt76_worker_fn+0xe8/0x170 [mt76 ef716597d11a77150bc07e3fdd68eeb0f9b56917]
[ 52.008449]  kthread+0x148/0x3ac
[ 52.008466]  ret_from_fork+0x10/0x30
CVE-2024-41075 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: cachefiles: add consistency check for copen/cread This prevents malicious processes from completing random copen/cread requests and crashing the system. Added checks are listed below:
* Generic: copen can only complete open requests, and cread can only complete read requests.
* For copen, ondemand_id must not be 0, because this indicates that the request has not been read by the daemon.
* For cread, the object corresponding to fd and req should be the same.
CVE-2024-57889 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: pinctrl: mcp23s08: Fix sleeping in atomic context due to regmap locking If a device uses an MCP23xxx IO expander to receive IRQs, the following bug can happen:

BUG: sleeping function called from invalid context at kernel/locking/mutex.c:283
in_atomic(): 1, irqs_disabled(): 1, non_block: 0, ...
preempt_count: 1, expected: 0
...
Call Trace:
...
 __might_resched+0x104/0x10e
 __might_sleep+0x3e/0x62
 mutex_lock+0x20/0x4c
 regmap_lock_mutex+0x10/0x18
 regmap_update_bits_base+0x2c/0x66
 mcp23s08_irq_set_type+0x1ae/0x1d6
 __irq_set_trigger+0x56/0x172
 __setup_irq+0x1e6/0x646
 request_threaded_irq+0xb6/0x160
...

We observed the problem while experimenting with a touchscreen driver which used an MCP23017 IO expander (I2C). The regmap in the pinctrl-mcp23s08 driver uses a mutex for protection from concurrent accesses, which is the default for regmaps without .fast_io, .disable_locking, etc. mcp23s08_irq_set_type() calls regmap_update_bits_base(), and the latter locks the mutex. However, __setup_irq() takes the desc->lock spinlock before calling these functions. As a result, the system tries to lock the mutex while holding the spinlock. It seems the internal regmap locks are not needed in this driver at all. mcp->lock already seems to protect the regmap from concurrent accesses, except, probably, in mcp_pinconf_get/set. mcp23s08_irq_set_type() and mcp23s08_irq_mask/unmask() are called under chip_bus_lock(), which calls mcp23s08_irq_bus_lock(). The latter takes mcp->lock and enables regmap caching, so that the potentially slow I2C accesses are deferred until chip_bus_unlock(). The accesses to the regmap from mcp23s08_probe_one() do not need additional locking. In all remaining places where the regmap is accessed, except mcp_pinconf_get/set(), the driver already takes mcp->lock. This patch adds locking in mcp_pinconf_get/set() and disables internal locking in the regmap config. Among other things, it fixes the sleeping in atomic context described above.
CVE-2023-53087 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: drm/i915/active: Fix misuse of non-idle barriers as fence trackers Users reported oopses on list corruptions when using i915 perf with a number of concurrently running graphics applications. Root cause analysis pointed at an issue in barrier processing code -- a race among perf open / close replacing active barriers with perf requests on the kernel context and concurrent barrier preallocate / acquire operations performed during user context first pin / last unpin. When adding a request to a composite tracker, we try to reuse an existing fence tracker, already allocated and registered with that composite. The tracker we obtain may already track another fence, may be an idle barrier, or an active barrier. If the tracker we get happens to be a non-idle barrier, then we try to delete that barrier from the list of barrier tasks it belongs to. However, while doing that we don't respect the return value from the function that performs the barrier deletion. Should the deletion ever fail, we would end up reusing the tracker still registered as a barrier task. Since the same structure field is reused with both fence callback lists and the barrier tasks list, list corruptions would likely occur. Barriers are now deleted from a barrier tasks list by temporarily removing the list content, traversing that content while skipping over the node to be deleted, then populating the list back with the modified content. Should those intentionally racy concurrent deletion attempts not be serialized, one or more of them may fail because the list is temporarily empty. Related code that ignores the results of barrier deletion was initially introduced in v5.4 by commit d8af05ff38ae ("drm/i915: Allow sharing the idle-barrier from other kernel requests"). However, all users of the barrier deletion routine were apparently serialized at that time, so the issue didn't exhibit itself. Results of git bisect with help of a newly developed igt@gem_barrier_race@remote-request IGT test indicate that list corruptions might start to appear after commit 311770173fac ("drm/i915/gt: Schedule request retirement when timeline idles"), introduced in v5.5. Respect the results of barrier deletion attempts -- mark the barrier as idle only if it was successfully deleted from the list. Then, before proceeding with setting our fence as the one currently tracked, make sure that the tracker we've got is not a non-idle barrier. If that check fails then don't use that tracker but go back and try to acquire a new, usable one.
v3: use unlikely() to document what outcome we expect (Andi); fix bad grammar in commit description.
v2: no code changes; blame commit 311770173fac ("drm/i915/gt: Schedule request retirement when timeline idles"), v5.5, not commit d8af05ff38ae ("drm/i915: Allow sharing the idle-barrier from other kernel requests"), v5.4; reword commit description.
(cherry picked from commit 506006055769b10d1b2b4e22f636f3b45e0e9fc7)
CVE-2024-57936 1 Linux 1 Linux Kernel 2025-07-13 4.4 Medium
In the Linux kernel, the following vulnerability has been resolved: RDMA/bnxt_re: Fix max SGEs for the Work Request Gen P7 supports up to 13 SGEs for now, but the software WQE structure can hold only 6. Since the max send SGE is reported as 13, the stack can issue requests with up to 13 SGEs, which causes traffic failures and system crashes. Use the define for the maximum supported SGEs when sizing for variable-size WQEs. This will work for both static and variable WQEs.
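The underlying bug pattern is an array sized by a stale constant that no longer matches the capability advertised to callers. A tiny hedged illustration with invented names (the real bnxt_re structures are not shown):

#include <stdio.h>

/* One limit, used both to size the software WQE and to advertise the
 * capability, so the two can never drift apart. */
#define MAX_SGES 13

struct sge { unsigned long long addr; unsigned int len; };

struct sw_wqe {
    int nsge;
    struct sge sges[MAX_SGES];   /* previously a smaller hard-coded size */
};

static int advertised_max_sge(void) { return MAX_SGES; }

int main(void)
{
    struct sw_wqe wqe = { .nsge = advertised_max_sge() };

    /* A request using the advertised maximum now fits in the software WQE. */
    printf("max sge %d fits %zu slots\n", wqe.nsge,
           sizeof(wqe.sges) / sizeof(wqe.sges[0]));
    return 0;
}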
CVE-2024-57904 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: iio: adc: at91: call input_free_device() on allocated iio_dev The current implementation of at91_ts_register() calls input_free_device() on st->ts_input; however, the err label can be reached before the allocated iio_dev is stored to st->ts_input. Thus call input_free_device() on input instead of st->ts_input.
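The bug class is an error path that frees a field which has not been assigned yet instead of the local pointer that was actually allocated. A reduced userspace sketch with hypothetical names (not the at91 driver code):

#include <stdio.h>
#include <stdlib.h>

struct state { char *ts_input; };

static int register_input(struct state *st)
{
    char *input = malloc(16);   /* stands in for the allocated input device */
    int err = -1;               /* pretend a later setup step failed */

    if (!input)
        return -1;

    if (err) {
        /* Correct: free the local allocation; st->ts_input was never set,
         * so freeing it here would do nothing and leak `input`. */
        free(input);
        return err;
    }

    st->ts_input = input;       /* only stored on the success path */
    return 0;
}

int main(void)
{
    struct state st = { 0 };

    register_input(&st);
    printf("ts_input=%p\n", (void *)st.ts_input);
    return 0;
}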
CVE-2025-38575 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: ksmbd: use aead_request_free to match aead_request_alloc Use aead_request_free() instead of kfree() to properly free memory allocated by aead_request_alloc(). This ensures sensitive crypto data is zeroed before being freed.
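The principle is to release an allocation with its matching free routine, because the allocator's counterpart may do more than return memory to the heap; here it also wipes sensitive data. A generic illustration with hypothetical request_alloc()/request_free() helpers, not the kernel crypto API:

#include <stdlib.h>
#include <string.h>

struct request { unsigned char key_material[32]; };

static struct request *request_alloc(void)
{
    return calloc(1, sizeof(struct request));
}

/* Matching release: wipe the sensitive contents before freeing, which a
 * plain free() (like the plain kfree() in the bug) would skip. */
static void request_free(struct request *req)
{
    if (!req)
        return;
    memset(req, 0, sizeof(*req));
    free(req);
}

int main(void)
{
    struct request *req = request_alloc();

    if (!req)
        return 1;
    /* ... use req ... */
    request_free(req);   /* not free(req) */
    return 0;
}

In production code the wipe would use explicit_bzero() or an equivalent that cannot be optimized away; the point here is only that the paired free routine, not the generic one, owns that responsibility.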
CVE-2023-53106 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: nfc: st-nci: Fix use after free bug in ndlc_remove due to race condition This bug affects both st_nci_i2c_remove and st_nci_spi_remove. Take st_nci_i2c_remove as an example. In st_nci_i2c_probe, it calls ndlc_probe and binds &ndlc->sm_work to llt_ndlc_sm_work. When ndlc_recv or the timeout handler is called, it will finally call schedule_work to start the work. When we call st_nci_i2c_remove to remove the driver, there may be a sequence as follows:

CPU0                  | CPU1
                      | llt_ndlc_sm_work
st_nci_i2c_remove     |
 ndlc_remove          |
  st_nci_remove       |
   nci_free_device    |
    kfree(ndev)       |
    //free ndlc->ndev |
                      | llt_ndlc_rcv_queue
                      | nci_recv_frame
                      | //use ndlc->ndev

Fix it by finishing the work before cleanup in ndlc_remove.
CVE-2024-27006 1 Linux 1 Linux Kernel 2025-07-13 5.5 Medium
In the Linux kernel, the following vulnerability has been resolved: thermal/debugfs: Add missing count increment to thermal_debug_tz_trip_up() The count field in struct trip_stats, representing the number of times the zone temperature was above the trip point, needs to be incremented in thermal_debug_tz_trip_up(), for two reasons. First, if a trip point is crossed on the way up for the first time, thermal_debug_update_temp() called from update_temperature() does not see it because it has not been added to the trips_crossed[] array in the thermal zone's struct tz_debugfs object yet. Therefore, when thermal_debug_tz_trip_up() is called after that, the trip point's count value is 0, and the attempt to divide by it during the average temperature computation leads to a divide error which causes the kernel to crash. Incrementing the count to 1 before the division fixes this problem. Second, if a trip point is crossed on the way up, but it has been crossed on the way up already before, its count value needs to be incremented to make a record of the fact that the zone temperature is above the trip now. Without doing that, if the mitigations applied after crossing the trip cause the zone temperature to drop below its threshold, the count will not be updated for this episode at all and the average temperature in the trip statistics record will be somewhat higher than it should be. Cc: 6.8+ <[email protected]> # 6.8+
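The arithmetic side of the fix is simple: the sample count must be incremented before it is used as a divisor, so the first crossing cannot divide by zero. A small sketch with invented field names (not the thermal debugfs structures), using a standard incremental-mean update:

#include <stdio.h>

struct trip_stats {
    int count;      /* times the zone temperature was above the trip point */
    long avg;       /* running average temperature */
};

/* Record a new sample; incrementing count first keeps the divisor non-zero
 * even on the very first crossing. */
static void trip_update(struct trip_stats *s, long temp)
{
    s->count++;
    s->avg = s->avg + (temp - s->avg) / s->count;
}

int main(void)
{
    struct trip_stats s = { 0, 0 };

    trip_update(&s, 55000);   /* first crossing: count becomes 1, no div-by-0 */
    trip_update(&s, 57000);
    printf("count=%d avg=%ld\n", s.count, s.avg);
    return 0;
}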