Filtered by vendor
Total: 346 CVE
CVE | Vendors | Products | Updated | CVSS v3.1 |
---|---|---|---|---|
CVE-2022-49156 | 1 Redhat | 1 Enterprise Linux | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: scsi: qla2xxx: Fix scheduling while atomic The driver makes a call into the midlayer (fc_remote_port_delete) which can put the thread to sleep. The thread that originates the call is in interrupt context. The combination of the two triggers a crash. Schedule the call in non-interrupt context, where it is safer. kernel: BUG: scheduling while atomic: swapper/7/0/0x00010000 kernel: Call Trace: kernel: <IRQ> kernel: dump_stack+0x66/0x81 kernel: __schedule_bug.cold.90+0x5/0x1d kernel: __schedule+0x7af/0x960 kernel: schedule+0x28/0x80 kernel: schedule_timeout+0x26d/0x3b0 kernel: wait_for_completion+0xb4/0x140 kernel: ? wake_up_q+0x70/0x70 kernel: __wait_rcu_gp+0x12c/0x160 kernel: ? sdev_evt_alloc+0xc0/0x180 [scsi_mod] kernel: synchronize_sched+0x6c/0x80 kernel: ? call_rcu_bh+0x20/0x20 kernel: ? __bpf_trace_rcu_invoke_callback+0x10/0x10 kernel: sdev_evt_alloc+0xfd/0x180 [scsi_mod] kernel: starget_for_each_device+0x85/0xb0 [scsi_mod] kernel: ? scsi_init_io+0x360/0x3d0 [scsi_mod] kernel: scsi_init_io+0x388/0x3d0 [scsi_mod] kernel: device_for_each_child+0x54/0x90 kernel: fc_remote_port_delete+0x70/0xe0 [scsi_transport_fc] kernel: qla2x00_schedule_rport_del+0x62/0xf0 [qla2xxx] kernel: qla2x00_mark_device_lost+0x9c/0xd0 [qla2xxx] kernel: qla24xx_handle_plogi_done_event+0x55f/0x570 [qla2xxx] kernel: qla2x00_async_login_sp_done+0xd2/0x100 [qla2xxx] kernel: qla24xx_logio_entry+0x13a/0x3c0 [qla2xxx] kernel: qla24xx_process_response_queue+0x306/0x400 [qla2xxx] kernel: qla24xx_msix_rsp_q+0x3f/0xb0 [qla2xxx] kernel: __handle_irq_event_percpu+0x40/0x180 kernel: handle_irq_event_percpu+0x30/0x80 kernel: handle_irq_event+0x36/0x60 | ||||
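The pattern behind this fix is general: work that may sleep has to be handed off from interrupt context to a kworker. The sketch below is not the qla2xxx patch itself, just a minimal illustration of that pattern; `my_port`, `my_isr` and `do_sleeping_teardown()` are hypothetical names.

```c
#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct my_port {
	struct work_struct del_work;	/* set up with INIT_WORK() at probe time */
	/* ... driver state ... */
};

void do_sleeping_teardown(struct my_port *port);	/* hypothetical sleeping call */

/* Runs in process context, so calls that may sleep (such as the
 * fc_remote_port_delete()-style midlayer helpers) are safe here. */
static void my_port_del_work(struct work_struct *work)
{
	struct my_port *port = container_of(work, struct my_port, del_work);

	do_sleeping_teardown(port);
}

static irqreturn_t my_isr(int irq, void *data)
{
	struct my_port *port = data;

	/* Interrupt context: never call anything that may sleep.
	 * Defer the teardown to a kworker instead. */
	schedule_work(&port->del_work);
	return IRQ_HANDLED;
}
```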
CVE-2022-49124 | 1 Redhat | 1 Enterprise Linux | 2025-05-04 | 4.4 Medium |
In the Linux kernel, the following vulnerability has been resolved: x86/mce: Work around an erratum on fast string copy instructions A rare kernel panic scenario can happen when the following conditions are met due to an erratum on fast string copy instructions: 1) An uncorrected error. 2) That error must be in the first cache line of a page. 3) Kernel must execute page_copy from the page immediately before that page. The fast string copy instructions ("REP; MOVS*") could consume an uncorrectable memory error in the cache line _right after_ the desired region to copy and raise an MCE. Bit 0 of MSR_IA32_MISC_ENABLE can be cleared to disable fast string copy and will avoid such spurious machine checks. However, that is less preferable due to the permanent performance impact. Considering memory poison is rare, it's desirable to keep fast string copy enabled until an MCE is seen. Intel has confirmed the following: 1. The CPU erratum of fast string copy only applies to Skylake, Cascade Lake and Cooper Lake generations. Directly returning from the MCE handler: 2. Will result in complete execution of the "REP; MOVS*" with no data loss or corruption. 3. Will not result in another MCE firing on the next poisoned cache line due to "REP; MOVS*". 4. Will resume execution from a correct point in code. 5. Will result in the same instruction that triggered the MCE firing a second MCE immediately for any other software recoverable data fetch errors. 6. Is not safe without disabling the fast string copy, as the next fast string copy of the same buffer on the same CPU would result in a PANIC MCE. This should mitigate the erratum completely, with the only caveat being that fast string copy is disabled on the affected hyperthread and thus some performance is lost. This is still better than the OS crashing on MCEs raised on an irrelevant process due to "REP; MOVS*" accesses in a kernel context, e.g., copy_page. Injected errors on 1st cache line of 8 anonymous pages of process 'proc1' and observed MCE consumption from 'proc2' with no panic (directly returned). Without the fix, the host panicked within a few minutes on a random 'proc2' process due to kernel access from copy_page. [ bp: Fix comment style + touch ups, zap an unlikely(), improve the quirk function's readability. ] | ||||
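As a rough sketch of the mitigation described above (not the actual mce code), the handler can turn off fast string copy on the affected CPU by clearing bit 0 of `MSR_IA32_MISC_ENABLE` and then let the interrupted copy resume; `erratum_applies()` below is a hypothetical stand-in for the Skylake/Cascade Lake/Cooper Lake model check.

```c
#include <linux/printk.h>
#include <asm/msr.h>
#include <asm/msr-index.h>

bool erratum_applies(void);	/* hypothetical CPU-model check */

/* Hedged sketch only: disable fast string copy on this CPU so the same
 * "REP; MOVS*" cannot consume the poisoned cache line again, at the
 * cost of slower string copies on this hyperthread. */
static void quirk_disable_fast_string(void)
{
	if (!erratum_applies())
		return;

	/* Bit 0 of IA32_MISC_ENABLE controls fast string copy. */
	msr_clear_bit(MSR_IA32_MISC_ENABLE, 0);
	pr_warn_once("MCE: fast string copy disabled on this CPU due to erratum\n");
}
```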
CVE-2022-49004 | 1 Linux | 1 Linux Kernel | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: riscv: Sync efi page table's kernel mappings before switching The EFI page table is initially created as a copy of the kernel page table. With VMAP_STACK enabled, kernel stacks are allocated in the vmalloc area: if the stack is allocated in a new PGD (one that was not present at the moment of the efi page table creation or not synced in a previous vmalloc fault), the kernel will take a trap when switching to the efi page table when the vmalloc kernel stack is accessed, resulting in a kernel panic. Fix that by updating the efi kernel mappings before switching to the efi page table. | ||||
CVE-2022-48974 | 2 Linux, Redhat | 2 Linux Kernel, Enterprise Linux | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: netfilter: conntrack: fix using __this_cpu_add in preemptible Currently in nf_conntrack_hash_check_insert(), when it fails in nf_ct_ext_valid_pre/post(), NF_CT_STAT_INC() will be called in the preemptible context, a call trace can be triggered: BUG: using __this_cpu_add() in preemptible [00000000] code: conntrack/1636 caller is nf_conntrack_hash_check_insert+0x45/0x430 [nf_conntrack] Call Trace: <TASK> dump_stack_lvl+0x33/0x46 check_preemption_disabled+0xc3/0xf0 nf_conntrack_hash_check_insert+0x45/0x430 [nf_conntrack] ctnetlink_create_conntrack+0x3cd/0x4e0 [nf_conntrack_netlink] ctnetlink_new_conntrack+0x1c0/0x450 [nf_conntrack_netlink] nfnetlink_rcv_msg+0x277/0x2f0 [nfnetlink] netlink_rcv_skb+0x50/0x100 nfnetlink_rcv+0x65/0x144 [nfnetlink] netlink_unicast+0x1ae/0x290 netlink_sendmsg+0x257/0x4f0 sock_sendmsg+0x5f/0x70 This patch is to fix it by changing to use NF_CT_STAT_INC_ATOMIC() for nf_ct_ext_valid_pre/post() check in nf_conntrack_hash_check_insert(), as well as nf_ct_ext_valid_post() in __nf_conntrack_confirm(). Note that nf_ct_ext_valid_pre() check in __nf_conntrack_confirm() is safe to use NF_CT_STAT_INC(), as it's under local_bh_disable(). | ||||
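The distinction the fix relies on is the difference between the raw and the migration-safe per-CPU increments. The sketch below shows that general contrast with a hypothetical counter rather than the conntrack statistics macros themselves.

```c
#include <linux/percpu.h>

DEFINE_PER_CPU(unsigned long, my_stat);

/* Only legal when the caller already prevents migration (preemption or
 * BH disabled); CONFIG_DEBUG_PREEMPT reports "using __this_cpu_add() in
 * preemptible" otherwise. NF_CT_STAT_INC() wraps this flavour. */
static void stat_inc_atomic_context(void)
{
	__this_cpu_inc(my_stat);
}

/* Safe from preemptible context because the operation itself is
 * migration-safe. NF_CT_STAT_INC_ATOMIC() wraps this flavour. */
static void stat_inc_preemptible_context(void)
{
	this_cpu_inc(my_stat);
}
```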
CVE-2022-48897 | 1 Linux | 1 Linux Kernel | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: arm64/mm: fix incorrect file_map_count for invalid pmd The page table check triggers BUG_ON() unexpectedly when splitting a hugepage: ------------[ cut here ]------------ kernel BUG at mm/page_table_check.c:119! Internal error: Oops - BUG: 00000000f2000800 [#1] SMP Dumping ftrace buffer: (ftrace buffer empty) Modules linked in: CPU: 7 PID: 210 Comm: transhuge-stres Not tainted 6.1.0-rc3+ #748 Hardware name: linux,dummy-virt (DT) pstate: 20000005 (nzCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) pc : page_table_check_set.isra.0+0x398/0x468 lr : page_table_check_set.isra.0+0x1c0/0x468 [...] Call trace: page_table_check_set.isra.0+0x398/0x468 __page_table_check_pte_set+0x160/0x1c0 __split_huge_pmd_locked+0x900/0x1648 __split_huge_pmd+0x28c/0x3b8 unmap_page_range+0x428/0x858 unmap_single_vma+0xf4/0x1c8 zap_page_range+0x2b0/0x410 madvise_vma_behavior+0xc44/0xe78 do_madvise+0x280/0x698 __arm64_sys_madvise+0x90/0xe8 invoke_syscall.constprop.0+0xdc/0x1d8 do_el0_svc+0xf4/0x3f8 el0_svc+0x58/0x120 el0t_64_sync_handler+0xb8/0xc0 el0t_64_sync+0x19c/0x1a0 [...] On arm64, pmd_leaf() will return true even if the pmd is invalid due to the pmd_present_invalid() check. So in pmdp_invalidate() the file_map_count will not only decrease once but also increase once. Then in set_pte_at(), the file_map_count increases again, and so triggers BUG_ON() unexpectedly. Add a !pmd_present_invalid() check in pmd_user_accessible_page() to fix the problem. | ||||
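The shape of the fix is a one-line addition to the accessibility predicate. The sketch below mirrors the helper names quoted in the description (arm64 `<asm/pgtable.h>` context assumed) and is only illustrative; the exact in-tree definition may differ in detail.

```c
/* Illustrative only: a present-invalid pmd must not be treated as a
 * user-accessible leaf, otherwise pmdp_invalidate() and the following
 * set_pte_at() both bump file_map_count and the page table check
 * BUG()s. */
static inline bool pmd_user_accessible_page(pmd_t pmd)
{
	return pmd_leaf(pmd) && !pmd_present_invalid(pmd) &&
	       (pmd_user(pmd) || pmd_user_exec(pmd));
}
```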
CVE-2022-48891 | 1 Linux | 1 Linux Kernel | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: regulator: da9211: Use irq handler when ready If the system does not come from reset (like when it is kexec()), the regulator might have an IRQ waiting for us. If we enable the IRQ handler before its structures are ready, we crash. This patch fixes: [ 1.141839] Unable to handle kernel read from unreadable memory at virtual address 0000000000000078 [ 1.316096] Call trace: [ 1.316101] blocking_notifier_call_chain+0x20/0xa8 [ 1.322757] cpu cpu0: dummy supplies not allowed for exclusive requests [ 1.327823] regulator_notifier_call_chain+0x1c/0x2c [ 1.327825] da9211_irq_handler+0x68/0xf8 [ 1.327829] irq_thread+0x11c/0x234 [ 1.327833] kthread+0x13c/0x154 | ||||
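The general rule behind this fix is probe ordering: request the IRQ only after everything the handler dereferences is initialized, because after kexec an interrupt may already be pending. A generic sketch follows; `my_chip`, `my_chip_init()` and `my_irq_handler()` are hypothetical, not the da9211 code.

```c
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>

struct my_chip {
	int irq;
	/* regulator/notifier state the IRQ handler dereferences */
};

struct my_chip *my_chip_init(struct platform_device *pdev);	/* hypothetical */
irqreturn_t my_irq_handler(int irq, void *data);		/* hypothetical */

static int my_probe(struct platform_device *pdev)
{
	struct my_chip *chip;

	/* Fully initialise everything the handler will touch first. */
	chip = my_chip_init(pdev);
	if (IS_ERR(chip))
		return PTR_ERR(chip);

	/* Request the IRQ last: the handler can fire as soon as this
	 * returns, including for an interrupt left pending by kexec. */
	return devm_request_threaded_irq(&pdev->dev, chip->irq, NULL,
					 my_irq_handler, IRQF_ONESHOT,
					 "my-chip", chip);
}
```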
CVE-2022-48808 | 1 Linux | 1 Linux Kernel | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: net: dsa: fix panic when DSA master device unbinds on shutdown Rafael reports that on a system with LX2160A and Marvell DSA switches, if a reboot occurs while the DSA master (dpaa2-eth) is up, the following panic can be seen: systemd-shutdown[1]: Rebooting. Unable to handle kernel paging request at virtual address 00a0000800000041 [00a0000800000041] address between user and kernel address ranges Internal error: Oops: 96000004 [#1] PREEMPT SMP CPU: 6 PID: 1 Comm: systemd-shutdow Not tainted 5.16.5-00042-g8f5585009b24 #32 pc : dsa_slave_netdevice_event+0x130/0x3e4 lr : raw_notifier_call_chain+0x50/0x6c Call trace: dsa_slave_netdevice_event+0x130/0x3e4 raw_notifier_call_chain+0x50/0x6c call_netdevice_notifiers_info+0x54/0xa0 __dev_close_many+0x50/0x130 dev_close_many+0x84/0x120 unregister_netdevice_many+0x130/0x710 unregister_netdevice_queue+0x8c/0xd0 unregister_netdev+0x20/0x30 dpaa2_eth_remove+0x68/0x190 fsl_mc_driver_remove+0x20/0x5c __device_release_driver+0x21c/0x220 device_release_driver_internal+0xac/0xb0 device_links_unbind_consumers+0xd4/0x100 __device_release_driver+0x94/0x220 device_release_driver+0x28/0x40 bus_remove_device+0x118/0x124 device_del+0x174/0x420 fsl_mc_device_remove+0x24/0x40 __fsl_mc_device_remove+0xc/0x20 device_for_each_child+0x58/0xa0 dprc_remove+0x90/0xb0 fsl_mc_driver_remove+0x20/0x5c __device_release_driver+0x21c/0x220 device_release_driver+0x28/0x40 bus_remove_device+0x118/0x124 device_del+0x174/0x420 fsl_mc_bus_remove+0x80/0x100 fsl_mc_bus_shutdown+0xc/0x1c platform_shutdown+0x20/0x30 device_shutdown+0x154/0x330 __do_sys_reboot+0x1cc/0x250 __arm64_sys_reboot+0x20/0x30 invoke_syscall.constprop.0+0x4c/0xe0 do_el0_svc+0x4c/0x150 el0_svc+0x24/0xb0 el0t_64_sync_handler+0xa8/0xb0 el0t_64_sync+0x178/0x17c It can be seen from the stack trace that the problem is that the deregistration of the master causes a dev_close(), which gets notified as NETDEV_GOING_DOWN to dsa_slave_netdevice_event(). But dsa_switch_shutdown() has already run, and this has unregistered the DSA slave interfaces, and yet, the NETDEV_GOING_DOWN handler attempts to call dev_close_many() on those slave interfaces, leading to the problem. The previous attempt to avoid the NETDEV_GOING_DOWN on the master after dsa_switch_shutdown() was called seems improper. Unregistering the slave interfaces is unnecessary and unhelpful. Instead, after the slaves have stopped being uppers of the DSA master, we can now reset to NULL the master->dsa_ptr pointer, which will make DSA start ignoring all future notifier events on the master. | ||||
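The following is a sketch of the idea only, not the exact DSA patch: once the switch has shut down and the slaves are no longer uppers of the master, clear the pointer that marks the master as DSA-managed so later notifier events on it are ignored rather than touching freed slave state. `my_get_master()` is hypothetical.

```c
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <net/dsa.h>

struct net_device *my_get_master(struct dsa_switch *ds);	/* hypothetical */

static void my_switch_shutdown(struct dsa_switch *ds)
{
	struct net_device *master = my_get_master(ds);

	/* ... unlink and stop the slave interfaces ... */

	rtnl_lock();
	master->dsa_ptr = NULL;		/* master no longer "uses DSA" */
	rtnl_unlock();
}

static int my_netdevice_event(struct notifier_block *nb,
			      unsigned long event, void *ptr)
{
	struct net_device *dev = netdev_notifier_info_to_dev(ptr);

	if (!netdev_uses_dsa(dev))	/* dsa_ptr == NULL: nothing to do */
		return NOTIFY_DONE;

	/* ... handle events for DSA-managed masters as before ... */
	return NOTIFY_OK;
}
```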
CVE-2022-48629 | 1 Linux | 1 Linux Kernel | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: crypto: qcom-rng - ensure buffer for generate is completely filled The generate function in struct rng_alg expects that the destination buffer is completely filled if the function returns 0. qcom_rng_read() can run into a situation where the buffer is partially filled with randomness and the remaining part of the buffer is zeroed since qcom_rng_generate() doesn't check the return value. This issue can be reproduced by running the following from libkcapi: kcapi-rng -b 9000000 > OUTFILE The generated OUTFILE will have three huge sections that contain all zeros, and this is caused by the code where the test 'val & PRNG_STATUS_DATA_AVAIL' fails. Let's fix this issue by ensuring that qcom_rng_read() always returns with a full buffer if the function returns success. Let's also have qcom_rng_generate() return the correct value. Here are some statistics from the ent project (https://www.fourmilab.ch/random/) that show information about the quality of the generated numbers: $ ent -c qcom-random-before Value Char Occurrences Fraction 0 606748 0.067416 1 33104 0.003678 2 33001 0.003667 ... 253 32883 0.003654 254 33035 0.003671 255 33239 0.003693 Total: 9000000 1.000000 Entropy = 7.811590 bits per byte. Optimum compression would reduce the size of this 9000000 byte file by 2 percent. Chi square distribution for 9000000 samples is 9329962.81, and randomly would exceed this value less than 0.01 percent of the times. Arithmetic mean value of data bytes is 119.3731 (127.5 = random). Monte Carlo value for Pi is 3.197293333 (error 1.77 percent). Serial correlation coefficient is 0.159130 (totally uncorrelated = 0.0). Without this patch, the result of the chi-square test is 0.01%, and the numbers are certainly not random according to ent's project page. The results improve with this patch: $ ent -c qcom-random-after Value Char Occurrences Fraction 0 35432 0.003937 1 35127 0.003903 2 35424 0.003936 ... 253 35201 0.003911 254 34835 0.003871 255 35368 0.003930 Total: 9000000 1.000000 Entropy = 7.999979 bits per byte. Optimum compression would reduce the size of this 9000000 byte file by 0 percent. Chi square distribution for 9000000 samples is 258.77, and randomly would exceed this value 42.24 percent of the times. Arithmetic mean value of data bytes is 127.5006 (127.5 = random). Monte Carlo value for Pi is 3.141277333 (error 0.01 percent). Serial correlation coefficient is 0.000468 (totally uncorrelated = 0.0). This change was tested on a Nexus 5 phone (msm8974 SoC). | ||||
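The contract being fixed is that `generate()` may only return 0 when the whole destination buffer has been filled. The sketch below is a hedged illustration of that contract, not the qcom driver itself; `my_hw_read_chunk()` is a hypothetical helper returning how many bytes it produced, 0 when the hardware has no data ready yet, or a negative errno.

```c
#include <crypto/rng.h>
#include <linux/errno.h>

int my_hw_read_chunk(u8 *buf, unsigned int len);	/* hypothetical */

static int my_rng_read(u8 *data, unsigned int max)
{
	unsigned int filled = 0;

	while (filled < max) {
		int n = my_hw_read_chunk(data + filled, max - filled);

		if (n < 0)
			return n;
		if (n == 0) {
			/* No data ready yet; a real driver would poll a
			 * status register with a timeout here. */
			continue;
		}
		filled += n;
	}
	return 0;	/* only on a completely filled buffer */
}

static int my_rng_generate(struct crypto_rng *tfm, const u8 *src,
			   unsigned int slen, u8 *dst, unsigned int dlen)
{
	return my_rng_read(dst, dlen);	/* propagate errors to the caller */
}
```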
CVE-2023-52880 | 1 Redhat | 2 Enterprise Linux, Rhel Eus | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: tty: n_gsm: require CAP_NET_ADMIN to attach N_GSM0710 ldisc Any unprivileged user can attach N_GSM0710 ldisc, but it requires CAP_NET_ADMIN to create a GSM network anyway. Require initial namespace CAP_NET_ADMIN to do that. | ||||
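A minimal sketch of the privilege gate described above, assuming an illustrative line-discipline open hook rather than the actual n_gsm function:

```c
#include <linux/capability.h>
#include <linux/errno.h>
#include <linux/tty.h>

/* Refuse to attach the line discipline unless the caller has
 * CAP_NET_ADMIN in the initial user namespace, since creating the GSM
 * network devices requires it anyway. */
static int my_ldisc_open(struct tty_struct *tty)
{
	if (!capable(CAP_NET_ADMIN))
		return -EPERM;

	/* ... normal line discipline setup ... */
	return 0;
}
```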
CVE-2023-52634 | 2 Linux, Redhat | 2 Linux Kernel, Enterprise Linux | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: drm/amd/display: Fix disable_otg_wa logic [Why] When switching to another HDMI mode, we are unnecessarily disabling/enabling the FIFO, causing both HPO and DIG registers to be set at the same time when only HPO is supposed to be set. This can lead to a system hang the next time we change refresh rates, as there are cases when we don't disable OTG/FIFO but the FIFO is enabled when it isn't supposed to be. [How] Remove the FIFO enable/disable entirely. | ||||
CVE-2023-52619 | 1 Redhat | 2 Enterprise Linux, Rhel Eus | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: pstore/ram: Fix crash when setting number of cpus to an odd number When the number of cpu cores is adjusted to 7 or another odd number, the zone size becomes odd. The zone addresses become: addr of zone0 = BASE addr of zone1 = BASE + zone_size addr of zone2 = BASE + zone_size*2 ... The addresses of zone1/3/5/7 are therefore mapped to non-aligned virtual addresses, and crashes eventually occur when accessing them. So, use ALIGN_DOWN() to make sure the zone size is even to avoid this bug. | ||||
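A small sketch of the arithmetic above with illustrative names (`ALIGN_DOWN()` lives in `linux/align.h` on recent kernels, `linux/kernel.h` on older trees):

```c
#include <linux/align.h>

/* Zone N starts at base + N * zone_size, so an odd zone_size puts every
 * other zone on a misaligned address. Rounding the per-zone size down
 * to an even value keeps every zone start aligned. */
static size_t my_zone_size(size_t total_size, unsigned int nr_zones)
{
	return ALIGN_DOWN(total_size / nr_zones, 2);
}
```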
CVE-2023-52495 | 1 Linux | 1 Linux Kernel | 2025-05-04 | 7.8 High |
In the Linux kernel, the following vulnerability has been resolved: soc: qcom: pmic_glink_altmode: fix port sanity check The PMIC GLINK altmode driver currently supports at most two ports. Fix the incomplete port sanity check on notifications to avoid accessing and corrupting memory beyond the port array if we ever get a notification for an unsupported port. | ||||
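A generic sketch of the bounds check described above; the structure and field names are illustrative, not the pmic_glink_altmode definitions.

```c
#include <linux/kernel.h>	/* ARRAY_SIZE() */
#include <linux/printk.h>

struct my_altmode_port {
	u8 mode;
	/* ... */
};

struct my_altmode {
	struct my_altmode_port ports[2];	/* driver supports at most two ports */
};

/* Reject any notification whose port index does not fit the port array
 * before using it as an index, so a bogus index can never read or write
 * past the array. */
static void my_handle_notification(struct my_altmode *altmode, unsigned int port)
{
	if (port >= ARRAY_SIZE(altmode->ports)) {
		pr_warn_ratelimited("notification for unsupported port %u\n", port);
		return;
	}

	/* altmode->ports[port] is safe to use from here on */
}
```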
CVE-2025-21994 | | | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: ksmbd: fix incorrect validation for num_aces field of smb_acl parse_dacl() validates num_aces before allocating posix_ace_state_array: if (num_aces > ULONG_MAX / sizeof(struct smb_ace *)) This validation is incorrect, as an array of size ULONG_MAX can never be created. smb_acl has a ->size field from which the actual number of ACEs that fit in the request buffer can be calculated. Use this to reject an invalid num_aces. | ||||
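A hedged sketch of the tighter bound described above; the on-the-wire layouts are simplified for illustration (the real definitions live in ksmbd's smbacl.h) and the helper name is made up.

```c
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/types.h>

/* Simplified layouts, for illustration only. */
struct smb_acl {
	__le16 revision;
	__le16 size;		/* total ACL length in the request */
	__le32 num_aces;
};

struct smb_ace {
	u8 type;
	u8 flags;
	__le16 size;
	__le32 access_req;
	/* SID follows */
};

/* Bound num_aces by what the ACL blob can actually hold rather than by
 * ULONG_MAX / sizeof(struct smb_ace *), which is never exceeded. */
static int my_check_num_aces(const struct smb_acl *acl, u32 num_aces)
{
	u16 size = le16_to_cpu(acl->size);

	if (size < sizeof(struct smb_acl))
		return -EINVAL;

	if (num_aces > (size - sizeof(struct smb_acl)) / sizeof(struct smb_ace))
		return -EINVAL;

	return 0;
}
```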
CVE-2025-21988 | | | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: fs/netfs/read_collect: add to next->prev_donated If multiple subrequests donate data to the same "next" request (depending on the subrequest completion order), each of them would overwrite the `prev_donated` field, causing data corruption and a BUG() crash ("Can't donate prior to front"). | ||||
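A sketch of the fix's shape, assuming the `prev_donated` field quoted in the description and an appropriate request lock held by the caller; this is illustrative, not the literal netfs patch.

```c
#include <linux/netfs.h>

/* When several completed subrequests donate bytes to the same following
 * subrequest, the donations must be accumulated, not assigned, or every
 * donor but the last is silently lost and the collector later hits
 * "Can't donate prior to front". */
static void my_donate_to_next(struct netfs_io_subrequest *next, size_t donated)
{
	/* Buggy shape:  next->prev_donated  = donated; */
	next->prev_donated += donated;	/* fixed: add to the running total */
}
```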
CVE-2025-21886 | | | 2025-05-04 | 4.4 Medium |
In the Linux kernel, the following vulnerability has been resolved: RDMA/mlx5: Fix implicit ODP hang on parent deregistration Fix the destroy_unused_implicit_child_mr() to prevent hanging during parent deregistration as of below [1]. Upon entering destroy_unused_implicit_child_mr(), the reference count for the implicit MR parent is incremented using: refcount_inc_not_zero(). A corresponding decrement must be performed if free_implicit_child_mr_work() is not called. The code has been updated to properly manage the reference count that was incremented. [1] INFO: task python3:2157 blocked for more than 120 seconds. Not tainted 6.12.0-rc7+ #1633 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:python3 state:D stack:0 pid:2157 tgid:2157 ppid:1685 flags:0x00000000 Call Trace: <TASK> __schedule+0x420/0xd30 schedule+0x47/0x130 __mlx5_ib_dereg_mr+0x379/0x5d0 [mlx5_ib] ? __pfx_autoremove_wake_function+0x10/0x10 ib_dereg_mr_user+0x5f/0x120 [ib_core] ? lock_release+0xc6/0x280 destroy_hw_idr_uobject+0x1d/0x60 [ib_uverbs] uverbs_destroy_uobject+0x58/0x1d0 [ib_uverbs] uobj_destroy+0x3f/0x70 [ib_uverbs] ib_uverbs_cmd_verbs+0x3e4/0xbb0 [ib_uverbs] ? __pfx_uverbs_destroy_def_handler+0x10/0x10 [ib_uverbs] ? lock_acquire+0xc1/0x2f0 ? ib_uverbs_ioctl+0xcb/0x170 [ib_uverbs] ? ib_uverbs_ioctl+0x116/0x170 [ib_uverbs] ? lock_release+0xc6/0x280 ib_uverbs_ioctl+0xe7/0x170 [ib_uverbs] ? ib_uverbs_ioctl+0xcb/0x170 [ib_uverbs] __x64_sys_ioctl+0x1b0/0xa70 ? kmem_cache_free+0x221/0x400 do_syscall_64+0x6b/0x140 entry_SYSCALL_64_after_hwframe+0x76/0x7e RIP: 0033:0x7f20f21f017b RSP: 002b:00007ffcfc4a77c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 00007ffcfc4a78d8 RCX: 00007f20f21f017b RDX: 00007ffcfc4a78c0 RSI: 00000000c0181b01 RDI: 0000000000000003 RBP: 00007ffcfc4a78a0 R08: 000056147d125190 R09: 00007f20f1f14c60 R10: 0000000000000001 R11: 0000000000000246 R12: 00007ffcfc4a7890 R13: 000000000000001c R14: 000056147d100fc0 R15: 00007f20e365c9d0 </TASK> | ||||
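A generic sketch of the reference-count discipline behind this fix, with hypothetical names rather than the mlx5 code: the caller takes a reference before trying to hand the child off to the free work, so if that work will not run, the caller must drop the reference itself or the final deregistration waits forever.

```c
#include <linux/refcount.h>

struct my_parent_mr {
	refcount_t refcount;
};

struct my_child_mr {
	struct my_parent_mr *parent;
};

bool my_try_queue_free_work(struct my_child_mr *child);	/* hypothetical */
void my_parent_put(struct my_parent_mr *parent);		/* hypothetical */

static void my_destroy_unused_child(struct my_child_mr *child)
{
	struct my_parent_mr *parent = child->parent;

	if (!refcount_inc_not_zero(&parent->refcount))
		return;				/* parent is already going away */

	if (!my_try_queue_free_work(child))
		my_parent_put(parent);		/* balance the increment above */
}
```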
CVE-2025-21860 | 1 Linux | 1 Linux Kernel | 2025-05-04 | 3.3 Low |
In the Linux kernel, the following vulnerability has been resolved: mm/zswap: fix inconsistency when zswap_store_page() fails Commit b7c0ccdfbafd ("mm: zswap: support large folios in zswap_store()") skips charging any zswap entries when it failed to zswap the entire folio. However, when some base pages are zswapped but it failed to zswap the entire folio, the zswap operation is rolled back. When freeing zswap entries for those pages, zswap_entry_free() uncharges the zswap entries that were not previously charged, causing zswap charging to become inconsistent. This inconsistency triggers two warnings with following steps: # On a machine with 64GiB of RAM and 36GiB of zswap $ stress-ng --bigheap 2 # wait until the OOM-killer kills stress-ng $ sudo reboot The two warnings are: in mm/memcontrol.c:163, function obj_cgroup_release(): WARN_ON_ONCE(nr_bytes & (PAGE_SIZE - 1)); in mm/page_counter.c:60, function page_counter_cancel(): if (WARN_ONCE(new < 0, "page_counter underflow: %ld nr_pages=%lu\n", new, nr_pages)) zswap_stored_pages also becomes inconsistent in the same way. As suggested by Kanchana, increment zswap_stored_pages and charge zswap entries within zswap_store_page() when it succeeds. This way, zswap_entry_free() will decrement the counter and uncharge the entries when it failed to zswap the entire folio. While this could potentially be optimized by batching objcg charging and incrementing the counter, let's focus on fixing the bug this time and leave the optimization for later after some evaluation. After resolving the inconsistency, the warnings disappear. [[email protected]: refactor zswap_store_page()] | ||||
CVE-2025-21842 | | | 2025-05-04 | 4.4 Medium |
In the Linux kernel, the following vulnerability has been resolved: amdkfd: properly free gang_ctx_bo when failed to init user queue The destructor of a GTT BO is declared as void amdgpu_amdkfd_free_gtt_mem(struct amdgpu_device *adev, void **mem_obj); which takes void ** as its second parameter. GCC accepts a void * argument here because void * converts implicitly to any other object pointer type, so the call compiles. However, the function then dereferences the argument as a void **, so passing a plain void * results in errors at run time. | ||||
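A standalone C illustration of this pitfall (not kernel code): passing a pointer value where the callee expects the pointer's address compiles silently because any `void *` converts to `void **`, but the dereference then misbehaves.

```c
#include <stdio.h>
#include <stdlib.h>

static void free_obj(void **obj)	/* expects the ADDRESS of the pointer */
{
	free(*obj);
	*obj = NULL;			/* clear the caller's pointer */
}

int main(void)
{
	void *buf = malloc(16);

	/*
	 * Wrong: free_obj(buf);
	 * Compiles without a cast because any void * converts to void **,
	 * but *obj would then read the buffer contents as if they were a
	 * pointer, which is undefined behaviour.
	 */

	free_obj(&buf);			/* right: pass the pointer's address */
	printf("buf is now %p\n", buf);
	return 0;
}
```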
CVE-2025-21829 | | | 2025-05-04 | 4.4 Medium |
In the Linux kernel, the following vulnerability has been resolved: RDMA/rxe: Fix the warning "__rxe_cleanup+0x12c/0x170 [rdma_rxe]" The Call Trace is as below: " <TASK> ? show_regs.cold+0x1a/0x1f ? __rxe_cleanup+0x12c/0x170 [rdma_rxe] ? __warn+0x84/0xd0 ? __rxe_cleanup+0x12c/0x170 [rdma_rxe] ? report_bug+0x105/0x180 ? handle_bug+0x46/0x80 ? exc_invalid_op+0x19/0x70 ? asm_exc_invalid_op+0x1b/0x20 ? __rxe_cleanup+0x12c/0x170 [rdma_rxe] ? __rxe_cleanup+0x124/0x170 [rdma_rxe] rxe_destroy_qp.cold+0x24/0x29 [rdma_rxe] ib_destroy_qp_user+0x118/0x190 [ib_core] rdma_destroy_qp.cold+0x43/0x5e [rdma_cm] rtrs_cq_qp_destroy.cold+0x1d/0x2b [rtrs_core] rtrs_srv_close_work.cold+0x1b/0x31 [rtrs_server] process_one_work+0x21d/0x3f0 worker_thread+0x4a/0x3c0 ? process_one_work+0x3f0/0x3f0 kthread+0xf0/0x120 ? kthread_complete_and_exit+0x20/0x20 ret_from_fork+0x22/0x30 </TASK> " When too many rdma resources are allocated, rxe needs more time to handle these rdma resources. Sometimes with the current timeout, rxe can not release the rdma resources correctly. Compared with other rdma drivers, a bigger timeout is used. | ||||
CVE-2025-21825 | | | 2025-05-04 | 3.3 Low |
In the Linux kernel, the following vulnerability has been resolved: bpf: Cancel the running bpf_timer through kworker for PREEMPT_RT During the update procedure, when overwrite element in a pre-allocated htab, the freeing of old_element is protected by the bucket lock. The reason why the bucket lock is necessary is that the old_element has already been stashed in htab->extra_elems after alloc_htab_elem() returns. If freeing the old_element after the bucket lock is unlocked, the stashed element may be reused by concurrent update procedure and the freeing of old_element will run concurrently with the reuse of the old_element. However, the invocation of check_and_free_fields() may acquire a spin-lock which violates the lockdep rule because its caller has already held a raw-spin-lock (bucket lock). The following warning will be reported when such race happens: BUG: scheduling while atomic: test_progs/676/0x00000003 3 locks held by test_progs/676: #0: ffffffff864b0240 (rcu_read_lock_trace){....}-{0:0}, at: bpf_prog_test_run_syscall+0x2c0/0x830 #1: ffff88810e961188 (&htab->lockdep_key){....}-{2:2}, at: htab_map_update_elem+0x306/0x1500 #2: ffff8881f4eac1b8 (&base->softirq_expiry_lock){....}-{2:2}, at: hrtimer_cancel_wait_running+0xe9/0x1b0 Modules linked in: bpf_testmod(O) Preemption disabled at: [<ffffffff817837a3>] htab_map_update_elem+0x293/0x1500 CPU: 0 UID: 0 PID: 676 Comm: test_progs Tainted: G ... 6.12.0+ #11 Tainted: [W]=WARN, [O]=OOT_MODULE Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)... Call Trace: <TASK> dump_stack_lvl+0x57/0x70 dump_stack+0x10/0x20 __schedule_bug+0x120/0x170 __schedule+0x300c/0x4800 schedule_rtlock+0x37/0x60 rtlock_slowlock_locked+0x6d9/0x54c0 rt_spin_lock+0x168/0x230 hrtimer_cancel_wait_running+0xe9/0x1b0 hrtimer_cancel+0x24/0x30 bpf_timer_delete_work+0x1d/0x40 bpf_timer_cancel_and_free+0x5e/0x80 bpf_obj_free_fields+0x262/0x4a0 check_and_free_fields+0x1d0/0x280 htab_map_update_elem+0x7fc/0x1500 bpf_prog_9f90bc20768e0cb9_overwrite_cb+0x3f/0x43 bpf_prog_ea601c4649694dbd_overwrite_timer+0x5d/0x7e bpf_prog_test_run_syscall+0x322/0x830 __sys_bpf+0x135d/0x3ca0 __x64_sys_bpf+0x75/0xb0 x64_sys_call+0x1b5/0xa10 do_syscall_64+0x3b/0xc0 entry_SYSCALL_64_after_hwframe+0x4b/0x53 ... </TASK> It seems feasible to break the reuse and refill of per-cpu extra_elems into two independent parts: reuse the per-cpu extra_elems with bucket lock being held and refill the old_element as per-cpu extra_elems after the bucket lock is unlocked. However, it will make the concurrent overwrite procedures on the same CPU return unexpected -E2BIG error when the map is full. Therefore, the patch fixes the lock problem by breaking the cancelling of bpf_timer into two steps for PREEMPT_RT: 1) use hrtimer_try_to_cancel() and check its return value 2) if the timer is running, use hrtimer_cancel() through a kworker to cancel it again Considering that the current implementation of hrtimer_cancel() will try to acquire a being held softirq_expiry_lock when the current timer is running, these steps above are reasonable. However, it also has downside. When the timer is running, the cancelling of the timer is delayed when releasing the last map uref. The delay is also fixable (e.g., break the cancelling of bpf timer into two parts: one part in locked scope, another one in unlocked scope), it can be revised later if necessary. It is a bit hard to decide the right fix tag. One reason is that the problem depends on PREEMPT_RT which is enabled in v6.12. 
Considering that softirq_expiry_lock has existed since v5.4 and bpf_timer was introduced in v5.15, the bpf_timer commit is used in the Fixes tag and an extra depends-on tag is added to state the dependency on PREEMPT_RT. Depends-on: v6.12+ with PREEMPT_RT enabled | ||||
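A hedged sketch of the two-step cancellation described above, with hypothetical wrapper names rather than the bpf internals: hrtimer_try_to_cancel() returns -1 only when the callback is currently executing, and only that case needs the deferred hrtimer_cancel(), which may spin on the softirq_expiry_lock under PREEMPT_RT.

```c
#include <linux/hrtimer.h>
#include <linux/workqueue.h>

struct my_bpf_timer {
	struct hrtimer timer;
	struct work_struct cancel_work;	/* set up with INIT_WORK() at init time */
};

static void my_timer_cancel_work(struct work_struct *work)
{
	struct my_bpf_timer *t = container_of(work, struct my_bpf_timer,
					      cancel_work);

	hrtimer_cancel(&t->timer);	/* safe: the kworker runs preemptibly */
}

static void my_timer_cancel(struct my_bpf_timer *t)
{
	if (hrtimer_try_to_cancel(&t->timer) >= 0)
		return;			/* timer was inactive or cancelled cleanly */

	/* The callback is running right now. */
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		schedule_work(&t->cancel_work);	/* finish the cancel from a kworker */
	else
		hrtimer_cancel(&t->timer);
}
```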
CVE-2025-21795 | | | 2025-05-04 | 5.5 Medium |
In the Linux kernel, the following vulnerability has been resolved: NFSD: fix hang in nfsd4_shutdown_callback If nfs4_client is in courtesy state then there is no point to send the callback. This causes nfsd4_shutdown_callback to hang since cl_cb_inflight is not 0. This hang lasts about 15 minutes until TCP notifies NFSD that the connection was dropped. This patch modifies nfsd4_run_cb_work to skip the RPC call if nfs4_client is in courtesy state. |
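A hedged sketch of the skip described above. The predicate and release helper are approximations of the nfsd internals rather than the literal patch, and `nfsd4_callback`/`nfs4_client` are the nfsd-internal types from fs/nfsd/state.h; the point is only that a callback for a courtesy client is dropped up front and its cl_cb_inflight reference released, so nfsd4_shutdown_callback() is not left waiting on it.

```c
/* Assumes the nfsd-internal headers (fs/nfsd/state.h) are in scope. */

bool my_client_is_courtesy(struct nfs4_client *clp);	/* hypothetical */
void my_release_cb(struct nfsd4_callback *cb);		/* hypothetical: drops cl_cb_inflight */

static void my_run_cb_work(struct nfsd4_callback *cb)
{
	struct nfs4_client *clp = cb->cb_clp;

	if (my_client_is_courtesy(clp)) {
		my_release_cb(cb);	/* skip the RPC entirely */
		return;
	}

	/* ... otherwise set up and queue the callback RPC as before ... */
}
```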