Fixes for sec_mipi_dsim-imx #2
Open
MrCry0 wants to merge 1 commit into nxp-imx:lf-5.15.y from MrCry0:lf-5.15.y-dsim
Conversation
MrCry0 (Contributor) commented Nov 1, 2022 (edited)
- A minor fix of an error message

The message appears when we have no VALID reset controls. Fix it.

Fixes: a31678d ("drm/imx: Replace reset flow for DSIM")
Signed-off-by: Oleksandr Suvorov <[email protected]>
Force-pushed from 54689bb to 2403fe3
louts-rock reviewed Jul 17, 2023
@@ -330,7 +330,7 @@ static int sec_dsim_of_parse_resets(struct imx_sec_dsim_device *dsim)
 	}

 	if (!rstc_num) {
-		dev_err(dev, "no invalid reset control exists\n");
+		dev_err(dev, "no valid reset control exists\n");
 		return -EINVAL;
 	}
this
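For context, here is a minimal sketch of the pattern this hunk sits in: the parse loop counts the reset controls that can actually be obtained and only fails when none are valid. This is not the driver source; the reset names, helper choice, and loop shape are illustrative assumptions.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/reset.h>

/* Hypothetical reset line names, for illustration only. */
static const char *const dsim_reset_names[] = { "byte", "dpi" };

static int parse_resets_sketch(struct device *dev)
{
        struct reset_control *rstc;
        int i, rstc_num = 0;

        for (i = 0; i < ARRAY_SIZE(dsim_reset_names); i++) {
                /* An entry that cannot be obtained is skipped, not fatal. */
                rstc = devm_reset_control_get_optional_exclusive(dev,
                                                dsim_reset_names[i]);
                if (IS_ERR_OR_NULL(rstc))
                        continue;
                rstc_num++;
        }

        /* The error path the diff corrects: no valid reset control found. */
        if (!rstc_num) {
                dev_err(dev, "no valid reset control exists\n");
                return -EINVAL;
        }

        return 0;
}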
sebastient pushed a commit to MaivinAI/linux-maivin that referenced this pull request Aug 3, 2023
[ Upstream commit 8e93f29 ]

The lag_lock is taken from both process and softirq contexts, which results in a lockdep warning[0] about a potential deadlock. However, just disabling softirqs by using the *_bh spinlock API is not enough, since it will cause a warning in some contexts where the lock is obtained with hard irqs disabled. To fix the issue, save the current irq state, disable irqs before obtaining the lock, and re-enable them from the saved state after releasing it.

[0]:
================================
WARNING: inconsistent lock state
5.19.0_for_upstream_debug_2022_08_04_16_06 #1 Not tainted
--------------------------------
inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
swapper/0/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
ffffffffa06dc0d8 (lag_lock){+.?.}-{2:2}, at: mlx5_lag_is_shared_fdb+0x1f/0x120 [mlx5_core]
{SOFTIRQ-ON-W} state was registered at:
  lock_acquire+0x1c1/0x550
  _raw_spin_lock+0x2c/0x40
  mlx5_lag_add_netdev+0x13b/0x480 [mlx5_core]
  mlx5e_nic_enable+0x114/0x470 [mlx5_core]
  mlx5e_attach_netdev+0x30e/0x6a0 [mlx5_core]
  mlx5e_resume+0x105/0x160 [mlx5_core]
  mlx5e_probe+0xac3/0x14f0 [mlx5_core]
  auxiliary_bus_probe+0x9d/0xe0
  really_probe+0x1e0/0xaa0
  __driver_probe_device+0x219/0x480
  driver_probe_device+0x49/0x130
  __driver_attach+0x1e4/0x4d0
  bus_for_each_dev+0x11e/0x1a0
  bus_add_driver+0x3f4/0x5a0
  driver_register+0x20f/0x390
  __auxiliary_driver_register+0x14e/0x260
  mlx5e_init+0x38/0x90 [mlx5_core]
  vhost_iotlb_itree_augment_rotate+0xcb/0x180 [vhost_iotlb]
  do_one_initcall+0xc4/0x400
  do_init_module+0x18a/0x620
  load_module+0x563a/0x7040
  __do_sys_finit_module+0x122/0x1d0
  do_syscall_64+0x3d/0x90
  entry_SYSCALL_64_after_hwframe+0x46/0xb0
irq event stamp: 3596508
hardirqs last enabled at (3596508): [<ffffffff813687c2>] __local_bh_enable_ip+0xa2/0x100
hardirqs last disabled at (3596507): [<ffffffff813687da>] __local_bh_enable_ip+0xba/0x100
softirqs last enabled at (3596488): [<ffffffff81368a2a>] irq_exit_rcu+0x11a/0x170
softirqs last disabled at (3596495): [<ffffffff81368a2a>] irq_exit_rcu+0x11a/0x170

other info that might help us debug this:
 Possible unsafe locking scenario:
       CPU0
       ----
  lock(lag_lock);
  <Interrupt>
    lock(lag_lock);

 *** DEADLOCK ***

4 locks held by swapper/0/0:
 #0: ffffffff84643260 (rcu_read_lock){....}-{1:2}, at: mlx5e_napi_poll+0x43/0x20a0 [mlx5_core]
 #1: ffffffff84643260 (rcu_read_lock){....}-{1:2}, at: netif_receive_skb_list_internal+0x2d7/0xd60
 #2: ffff888144a18b58 (&br->hash_lock){+.-.}-{2:2}, at: br_fdb_update+0x301/0x570
 #3: ffffffff84643260 (rcu_read_lock){....}-{1:2}, at: atomic_notifier_call_chain+0x5/0x1d0

stack backtrace:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.19.0_for_upstream_debug_2022_08_04_16_06 #1
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
Call Trace:
 <IRQ>
 dump_stack_lvl+0x57/0x7d
 mark_lock.part.0.cold+0x5f/0x92
 ? lock_chain_count+0x20/0x20
 ? unwind_next_frame+0x1c4/0x1b50
 ? secondary_startup_64_no_verify+0xcd/0xdb
 ? mlx5e_napi_poll+0x4e9/0x20a0 [mlx5_core]
 ? mlx5e_napi_poll+0x4e9/0x20a0 [mlx5_core]
 ? stack_access_ok+0x1d0/0x1d0
 ? start_kernel+0x3a7/0x3c5
 __lock_acquire+0x1260/0x6720
 ? lock_chain_count+0x20/0x20
 ? lock_chain_count+0x20/0x20
 ? register_lock_class+0x1880/0x1880
 ? mark_lock.part.0+0xed/0x3060
 ? stack_trace_save+0x91/0xc0
 lock_acquire+0x1c1/0x550
 ? mlx5_lag_is_shared_fdb+0x1f/0x120 [mlx5_core]
 ? lockdep_hardirqs_on_prepare+0x400/0x400
 ? __lock_acquire+0xd6f/0x6720
 _raw_spin_lock+0x2c/0x40
 ? mlx5_lag_is_shared_fdb+0x1f/0x120 [mlx5_core]
 mlx5_lag_is_shared_fdb+0x1f/0x120 [mlx5_core]
 mlx5_esw_bridge_rep_vport_num_vhca_id_get+0x1a0/0x600 [mlx5_core]
 ? mlx5_esw_bridge_update_work+0x90/0x90 [mlx5_core]
 ? lock_acquire+0x1c1/0x550
 mlx5_esw_bridge_switchdev_event+0x185/0x8f0 [mlx5_core]
 ? mlx5_esw_bridge_port_obj_attr_set+0x3e0/0x3e0 [mlx5_core]
 ? check_chain_key+0x24a/0x580
 atomic_notifier_call_chain+0xd7/0x1d0
 br_switchdev_fdb_notify+0xea/0x100
 ? br_switchdev_set_port_flag+0x310/0x310
 fdb_notify+0x11b/0x150
 br_fdb_update+0x34c/0x570
 ? lock_chain_count+0x20/0x20
 ? br_fdb_add_local+0x50/0x50
 ? br_allowed_ingress+0x5f/0x1070
 ? check_chain_key+0x24a/0x580
 br_handle_frame_finish+0x786/0x18e0
 ? check_chain_key+0x24a/0x580
 ? br_handle_local_finish+0x20/0x20
 ? __lock_acquire+0xd6f/0x6720
 ? sctp_inet_bind_verify+0x4d/0x190
 ? xlog_unpack_data+0x2e0/0x310
 ? br_handle_local_finish+0x20/0x20
 br_nf_hook_thresh+0x227/0x380 [br_netfilter]
 ? setup_pre_routing+0x460/0x460 [br_netfilter]
 ? br_handle_local_finish+0x20/0x20
 ? br_nf_pre_routing_ipv6+0x48b/0x69c [br_netfilter]
 br_nf_pre_routing_finish_ipv6+0x5c2/0xbf0 [br_netfilter]
 ? br_handle_local_finish+0x20/0x20
 br_nf_pre_routing_ipv6+0x4c6/0x69c [br_netfilter]
 ? br_validate_ipv6+0x9e0/0x9e0 [br_netfilter]
 ? br_nf_forward_arp+0xb70/0xb70 [br_netfilter]
 ? br_nf_pre_routing+0xacf/0x1160 [br_netfilter]
 br_handle_frame+0x8a9/0x1270
 ? br_handle_frame_finish+0x18e0/0x18e0
 ? register_lock_class+0x1880/0x1880
 ? br_handle_local_finish+0x20/0x20
 ? bond_handle_frame+0xf9/0xac0 [bonding]
 ? br_handle_frame_finish+0x18e0/0x18e0
 __netif_receive_skb_core+0x7c0/0x2c70
 ? check_chain_key+0x24a/0x580
 ? generic_xdp_tx+0x5b0/0x5b0
 ? __lock_acquire+0xd6f/0x6720
 ? register_lock_class+0x1880/0x1880
 ? check_chain_key+0x24a/0x580
 __netif_receive_skb_list_core+0x2d7/0x8a0
 ? lock_acquire+0x1c1/0x550
 ? process_backlog+0x960/0x960
 ? lockdep_hardirqs_on_prepare+0x129/0x400
 ? kvm_clock_get_cycles+0x14/0x20
 netif_receive_skb_list_internal+0x5f4/0xd60
 ? do_xdp_generic+0x150/0x150
 ? mlx5e_poll_rx_cq+0xf6b/0x2960 [mlx5_core]
 ? mlx5e_poll_ico_cq+0x3d/0x1590 [mlx5_core]
 napi_complete_done+0x188/0x710
 mlx5e_napi_poll+0x4e9/0x20a0 [mlx5_core]
 ? __queue_work+0x53c/0xeb0
 __napi_poll+0x9f/0x540
 net_rx_action+0x420/0xb70
 ? napi_threaded_poll+0x470/0x470
 ? __common_interrupt+0x79/0x1a0
 __do_softirq+0x271/0x92c
 irq_exit_rcu+0x11a/0x170
 common_interrupt+0x7d/0xa0
 </IRQ>
 <TASK>
 asm_common_interrupt+0x22/0x40
RIP: 0010:default_idle+0x42/0x60
Code: c1 83 e0 07 48 c1 e9 03 83 c0 03 0f b6 14 11 38 d0 7c 04 84 d2 75 14 8b 05 6b f1 22 02 85 c0 7e 07 0f 00 2d 80 3b 4a 00 fb f4 <c3> 48 c7 c7 e0 07 7e 85 e8 21 bd 40 fe eb de 66 66 2e 0f 1f 84 00
RSP: 0018:ffffffff84407e18 EFLAGS: 00000242
RAX: 0000000000000001 RBX: ffffffff84ec4a68 RCX: 1ffffffff0afc0fc
RDX: 0000000000000004 RSI: 0000000000000000 RDI: ffffffff835b1fac
RBP: 0000000000000000 R08: 0000000000000001 R09: ffff8884d2c44ac3
R10: ffffed109a588958 R11: 00000000ffffffff R12: 0000000000000000
R13: ffffffff84efac20 R14: 0000000000000000 R15: dffffc0000000000
 ? default_idle_call+0xcc/0x460
 default_idle_call+0xec/0x460
 do_idle+0x394/0x450
 ? arch_cpu_idle_exit+0x40/0x40
 cpu_startup_entry+0x19/0x20
 rest_init+0x156/0x250
 arch_call_rest_init+0xf/0x15
 start_kernel+0x3a7/0x3c5
 secondary_startup_64_no_verify+0xcd/0xdb
 </TASK>

Fixes: ff9b752 ("net/mlx5: Bridge, support LAG")
Signed-off-by: Vlad Buslov <[email protected]>
Reviewed-by: Mark Bloch <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
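The fix this message describes is the standard spin_lock_irqsave()/spin_unlock_irqrestore() pattern: it is safe whether the caller runs in process context, softirq context, or with hard irqs already disabled. A minimal sketch, with the lock-protected state reduced to a single flag (not the mlx5 code):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lag_lock);

/* Hypothetical state guarded by lag_lock, for illustration. */
static bool shared_fdb_enabled;

static bool lag_is_shared_fdb_sketch(void)
{
        unsigned long flags;
        bool res;

        /*
         * Plain spin_lock() is unsafe here (the lockdep splat above),
         * and spin_lock_bh() would warn where hard irqs are already
         * off; saving/restoring the irq state works in every context.
         */
        spin_lock_irqsave(&lag_lock, flags);
        res = shared_fdb_enabled;
        spin_unlock_irqrestore(&lag_lock, flags);

        return res;
}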
sebastient pushed a commit to MaivinAI/linux-maivin that referenced this pull request Aug 3, 2023
[ Upstream commit 2cbb958 ]

When we get a DMA channel and try to use it in multiple threads, it will cause an oops and hang the system.

% echo 100 > /sys/module/dmatest/parameters/threads_per_chan
% echo 100 > /sys/module/dmatest/parameters/iterations
% echo 1 > /sys/module/dmatest/parameters/run

[383493.327077] Unable to handle kernel paging request at virtual address dead000000000108
[383493.335103] Mem abort info:
[383493.335103]   ESR = 0x96000044
[383493.335105]   EC = 0x25: DABT (current EL), IL = 32 bits
[383493.335107]   SET = 0, FnV = 0
[383493.335108]   EA = 0, S1PTW = 0
[383493.335109]   FSC = 0x04: level 0 translation fault
[383493.335110] Data abort info:
[383493.335111]   ISV = 0, ISS = 0x00000044
[383493.364739]   CM = 0, WnR = 1
[383493.367793] [dead000000000108] address between user and kernel address ranges
[383493.375021] Internal error: Oops: 96000044 [#1] PREEMPT SMP
[383493.437574] CPU: 63 PID: 27895 Comm: dma0chan0-copy2 Kdump: loaded Tainted: GO 5.17.0-rc4+ #2
[383493.457851] pstate: 204000c9 (nzCv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[383493.465331] pc : vchan_tx_submit+0x64/0xa0
[383493.469957] lr : vchan_tx_submit+0x34/0xa0

This occurs because the transmission timed out, and that is due to a data race: each thread rewrites the channel's descriptor as soon as device_issue_pending is called. This leads to a situation where the driver thinks it is using the right descriptor in the interrupt handler while the channel's descriptor has been changed by another thread. The descriptor which in fact raised the interrupt is never handled, nor is its tx->callback. That is why the timeout is reported.

With the current fix, the channel's descriptor changes its value only when it has been used. A new descriptor is acquired from the vc->desc_issued queue, which is already filled with descriptors that are ready to be sent. Threads have no direct access to the DMA channel descriptor. In case the channel's descriptor is busy, try to submit to HW again when a descriptor is completed. In this case, vc->desc_issued may be empty when hisi_dma_start_transfer is called, so delete the error reporting on this. Now it is just possible to queue a descriptor for further processing.

Fixes: e9f08b6 ("dmaengine: hisilicon: Add Kunpeng DMA engine support")
Signed-off-by: Jie Hai <[email protected]>
Acked-by: Zhou Wang <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Vinod Koul <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
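The descriptor handling described here follows the common virt-dma pattern: fetch the next descriptor from vc->desc_issued via vchan_next_desc() only when the hardware is idle, and treat an empty issued list as "nothing to submit yet" rather than an error. A rough sketch under those assumptions; the channel and descriptor structs are placeholders, not the hisi_dma types:

#include <linux/kernel.h>
#include <linux/list.h>
#include "virt-dma.h"           /* drivers/dma/virt-dma.h */

struct sketch_desc {
        struct virt_dma_desc vd;
        /* hardware descriptor fields would live here */
};

struct sketch_chan {
        struct virt_dma_chan vc;
        struct sketch_desc *desc;       /* currently active descriptor */
};

/* Called with vc.lock held, and only when the channel is idle. */
static void sketch_start_transfer(struct sketch_chan *chan)
{
        struct virt_dma_desc *vd;

        /*
         * vc->desc_issued may legitimately be empty here: no longer
         * reported as an error, there is simply nothing to submit yet.
         */
        vd = vchan_next_desc(&chan->vc);
        if (!vd)
                return;

        list_del(&vd->node);
        chan->desc = container_of(vd, struct sketch_desc, vd);

        /* ... program the hardware from chan->desc and kick it ... */
}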
sebastient pushed a commit to MaivinAI/linux-maivin that referenced this pull request Aug 3, 2023
[ Upstream commit 99ee931 ]

There is a recursive lock on the cpu_hotplug_lock.

In kernel/trace/trace_osnoise.c:<start/stop>_per_cpu_kthreads:
- start_per_cpu_kthreads calls cpus_read_lock() and, if start_kthreads returns an error, it will call stop_per_cpu_kthreads.
- stop_per_cpu_kthreads then calls cpus_read_lock() again, causing a deadlock.

Fix this by calling cpus_read_unlock() before calling stop_per_cpu_kthreads. This behavior can also be seen in commit f46b165 ("trace/hwlat: Implement the per-cpu mode").

This error was noticed during the LTP ftrace-stress-test:

WARNING: possible recursive locking detected
--------------------------------------------
sh/275006 is trying to acquire lock:
ffffffffb02f5400 (cpu_hotplug_lock){++++}-{0:0}, at: stop_per_cpu_kthreads

but task is already holding lock:
ffffffffb02f5400 (cpu_hotplug_lock){++++}-{0:0}, at: start_per_cpu_kthreads

other info that might help us debug this:
 Possible unsafe locking scenario:
       CPU0
       ----
  lock(cpu_hotplug_lock);
  lock(cpu_hotplug_lock);

 *** DEADLOCK ***

May be due to missing lock nesting notation

3 locks held by sh/275006:
 #0: ffff8881023f0470 (sb_writers#24){.+.+}-{0:0}, at: ksys_write
 #1: ffffffffb084f430 (trace_types_lock){+.+.}-{3:3}, at: rb_simple_write
 #2: ffffffffb02f5400 (cpu_hotplug_lock){++++}-{0:0}, at: start_per_cpu_kthreads

Link: https://lkml.kernel.org/r/[email protected]
Fixes: c8895e2 ("trace/osnoise: Support hotplug operations")
Signed-off-by: Nico Pache <[email protected]>
Acked-by: Daniel Bristot de Oliveira <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
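The shape of the fix is easy to see in a condensed sketch: release cpu_hotplug_lock before entering the error path, because the stop routine takes the lock itself. The kthread helpers below are stand-ins, not the osnoise tracer code:

#include <linux/cpu.h>

static int start_kthread_sketch(unsigned int cpu)
{
        /* Stand-in: pretend the per-cpu kthread started fine. */
        return 0;
}

static void stop_per_cpu_kthreads_sketch(void)
{
        /*
         * Stand-in: like the real stop routine, this takes
         * cpus_read_lock() itself, which is why the caller must not
         * hold it when calling here.
         */
        cpus_read_lock();
        /* ... stop and put each per-cpu kthread ... */
        cpus_read_unlock();
}

static int start_per_cpu_kthreads_sketch(void)
{
        unsigned int cpu;
        int ret = 0;

        cpus_read_lock();
        for_each_online_cpu(cpu) {
                ret = start_kthread_sketch(cpu);
                if (ret)
                        break;
        }
        /*
         * Unlock before the error path: the buggy code called the
         * stop routine while still holding cpu_hotplug_lock, which
         * recursed on the lock.
         */
        cpus_read_unlock();

        if (ret)
                stop_per_cpu_kthreads_sketch();

        return ret;
}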
sebastient pushed a commit to MaivinAI/linux-maivin that referenced this pull request Aug 3, 2023
commit c3ed222 upstream.

Send the already-allocated fattr along with nfs4_fs_locations, and drop the memcpy of fattr. We end up growing two more allocations, but this fixes up a crash as:

PID: 790 TASK: ffff88811b43c000 CPU: 0 COMMAND: "ls"
 #0 [ffffc90000857920] panic at ffffffff81b9bfde
 #1 [ffffc900008579c0] do_trap at ffffffff81023a9b
 #2 [ffffc90000857a10] do_error_trap at ffffffff81023b78
 #3 [ffffc90000857a58] exc_stack_segment at ffffffff81be1f45
 #4 [ffffc90000857a80] asm_exc_stack_segment at ffffffff81c009de
 #5 [ffffc90000857b08] nfs_lookup at ffffffffa0302322 [nfs]
 #6 [ffffc90000857b70] __lookup_slow at ffffffff813a4a5f
 #7 [ffffc90000857c60] walk_component at ffffffff813a86c4
 #8 [ffffc90000857cb8] path_lookupat at ffffffff813a9553
 #9 [ffffc90000857cf0] filename_lookup at ffffffff813ab86b

Suggested-by: Trond Myklebust <[email protected]>
Fixes: 9558a00 ("NFS: Remove the label from the nfs4_lookup_res struct")
Signed-off-by: Benjamin Coddington <[email protected]>
Signed-off-by: Anna Schumaker <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
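A sketch of the allocation pattern the commit describes: hand a freshly allocated nfs_fattr to the fs_locations lookup instead of memcpy()ing into one. nfs_alloc_fattr() and nfs_free_fattr() are the real NFS helpers; the lookup function is a placeholder:

#include <linux/nfs_fs.h>

/* Stand-in for the call that decodes fs_locations and fills fattr. */
static int get_fs_locations_sketch(struct nfs_fattr *fattr)
{
        return 0;
}

static int referral_lookup_sketch(void)
{
        struct nfs_fattr *fattr;
        int err;

        /*
         * One extra allocation per lookup, but the fattr handed to
         * the decode path is properly initialized rather than being
         * memcpy()ed from a partially set-up structure.
         */
        fattr = nfs_alloc_fattr();
        if (fattr == NULL)
                return -ENOMEM;

        err = get_fs_locations_sketch(fattr);

        nfs_free_fattr(fattr);
        return err;
}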
sebastient pushed a commit to MaivinAI/linux-maivin that referenced this pull request Aug 3, 2023
commit 4f40a5b upstream.

This was missed in c3ed222 ("NFSv4: Fix free of uninitialized nfs4_label on referral lookup.") and causes a panic when mounting with '-o trunkdiscovery':

PID: 1604 TASK: ffff93dac3520000 CPU: 3 COMMAND: "mount.nfs"
 #0 [ffffb79140f738f8] machine_kexec at ffffffffaec64bee
 #1 [ffffb79140f73950] __crash_kexec at ffffffffaeda67fd
 #2 [ffffb79140f73a18] crash_kexec at ffffffffaeda76ed
 #3 [ffffb79140f73a30] oops_end at ffffffffaec2658d
 #4 [ffffb79140f73a50] general_protection at ffffffffaf60111e
    [exception RIP: nfs_fattr_init+0x5]
    RIP: ffffffffc0c18265 RSP: ffffb79140f73b08 RFLAGS: 00010246
    RAX: 0000000000000000 RBX: ffff93dac304a800 RCX: 0000000000000000
    RDX: ffffb79140f73bb0 RSI: ffff93dadc8cbb40 RDI: d03ee11cfaf6bd50
    RBP: ffffb79140f73be8 R8: ffffffffc0691560 R9: 0000000000000006
    R10: ffff93db3ffd3df8 R11: 0000000000000000 R12: ffff93dac4040000
    R13: ffff93dac2848e00 R14: ffffb79140f73b60 R15: ffffb79140f73b30
    ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
 #5 [ffffb79140f73b08] _nfs41_proc_get_locations at ffffffffc0c73d53 [nfsv4]
 #6 [ffffb79140f73bf0] nfs4_proc_get_locations at ffffffffc0c83e90 [nfsv4]
 #7 [ffffb79140f73c60] nfs4_discover_trunking at ffffffffc0c83fb7 [nfsv4]
 #8 [ffffb79140f73cd8] nfs_probe_fsinfo at ffffffffc0c0f95f [nfs]
 #9 [ffffb79140f73da0] nfs_probe_server at ffffffffc0c1026a [nfs]
    RIP: 00007f6254fce26e RSP: 00007ffc69496ac8 RFLAGS: 00000246
    RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6254fce26e
    RDX: 00005600220a82a0 RSI: 00005600220a64d0 RDI: 00005600220a6520
    RBP: 00007ffc69496c50 R8: 00005600220a8710 R9: 003035322e323231
    R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffc69496c50
    R13: 00005600220a8440 R14: 0000000000000010 R15: 0000560020650ef9
    ORIG_RAX: 00000000000000a5 CS: 0033 SS: 002b

Fixes: c3ed222 ("NFSv4: Fix free of uninitialized nfs4_label on referral lookup.")
Signed-off-by: Scott Mayhew <[email protected]>
Signed-off-by: Anna Schumaker <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
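The panic above fires inside nfs_fattr_init() because the fattr pointer handed down the trunking-discovery path was never set up. A minimal sketch of the missing step; nfs_alloc_fattr(), nfs_fattr_init(), and nfs_free_fattr() are the real NFS helpers, while the get_locations call is a placeholder:

#include <linux/nfs_fs.h>

/* Stand-in for the v4.1 get_locations RPC. */
static int get_locations_sketch(struct nfs_fattr *fattr)
{
        return 0;
}

static int discover_trunking_sketch(void)
{
        struct nfs_fattr *fattr;
        int err;

        fattr = nfs_alloc_fattr();      /* allocate an initialized fattr */
        if (fattr == NULL)
                return -ENOMEM;

        nfs_fattr_init(fattr);          /* reset it before (re)use */
        err = get_locations_sketch(fattr);

        nfs_free_fattr(fattr);
        return err;
}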