http://blog.kreyolys.com/2011/03/17/no-panic-its-just-a-kernel-panic/

One of a young sysadmin's biggest fears is being asked by management to find the root cause of a system crash or hang!

When you realize that there are no error messages in the logs and no obvious pattern related to high load, high I/O activity or memory exhaustion… it can be hard to come up with a relevant root cause (especially when it happens randomly once a year).

Experienced sysadmins and technical support engineers know the deal, though: there are ways to be proactive about this and get relevant clues about what happened during a crash or hang, most of the time.

One common answer to this issue is to use kexec/kdump, which allows the system to dump its memory contents (a vmcore) after a kernel panic.

In the case of a hang, you would need to press a specific set of “magic keys” to generate the vmcore.

***

Install Software
Install the RPMs “kexec-tools”, “crash”, “kernel-debuginfo” and “kernel-debuginfo-common”.

Kernel Options to specify
Add the “crashkernel” option to your kernel line in grub.conf
Just “crashkernel=128M” for RHEL6, but “crashkernel=128M@16M” for RHEL5
Ex in RHEL5:
kernel /boot/vmlinuz-2.6.17-1.2519.4.21.el5 ro root=LABEL=/ rhgb quiet crashkernel=128M@16M
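For comparison, a RHEL6 kernel line could look like the following (the kernel version and root device here are just illustrative, not taken from the original post):
kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet crashkernel=128M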

Kdump configuration
Specify the vmcore location in /etc/kdump.conf

Ex for dumping to a raw device:
raw /dev/sdb2
Ex for dumping to a file on a local filesystem:
ext3 /dev/sdb2 (the partition is mounted and the vmcore is generated under /var/crash on it)
Ex for dumping on the network with NFS:
net nfs.server.com:/remote/export/vmcores
Ex for dumping on the network with SSH:
net user@remote.server.com (then propagate the SSH keys with “/etc/init.d/kdump propagate”)
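Putting it together, a minimal /etc/kdump.conf for the local-filesystem case could look like this sketch (the device name is reused from the example above; core_collector is explained in the next section):

ext3 /dev/sdb2
path /var/crash
core_collector makedumpfile -d 31 -c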

/!\ If in doubt, make sure that the space available at the specified vmcore location matches the size of the physical memory. There are vmcore compression options available, but the best way to determine the size of the generated vmcore is to test… by crashing your server (see SysRq below).
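As a quick sanity check (ordinary commands, nothing specific to kdump), compare the installed memory with the free space at the dump location:

# free -m
# df -h /var/crash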

Page selection and compression
Discarding ‘useless’ memory pages and compressing the rest can be done with the core_collector command specified in the kdump.conf.

The core collector “makedumpfile” allows you to see the type of pages:
zero pages = 1
cache pages = 2
cache private = 4
user pages = 8
free pages = 16

The -d option takes the sum of the page types to exclude (31 = 1+2+4+8+16, i.e. all of the above); -c enables compression.

# throw out zero pages (containing no data)
# core_collector makedumpfile -d 1
# throw out all trivial pages
# core_collector makedumpfile -d 31
# compress all pages, but keep them all
# core_collector makedumpfile -c
# throw out trivial pages and compress (recommended)
core_collector makedumpfile -d 31 -c
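Since the dump level is just a sum of the values above, you can build your own; for instance (a hypothetical combination, not from the original post), to exclude everything except user pages:

# 1 (zero) + 2 (cache) + 4 (cache private) + 16 (free) = 23
core_collector makedumpfile -d 23 -c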

A restart of the kdump service is necessary after modifying the configuration file:

/etc/init.d/kdump restart
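It is also worth making sure the service is enabled at boot and came up cleanly (standard RHEL service commands, a small addition to the original steps):

# chkconfig kdump on
# service kdump status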

Set up the system to use kdump for system lockups (NMI) and OOM (Out Of Memory) scenarios

- Append the kernel option “nmi_watchdog=1” in grub.conf

- Add the following kernel parameters in sysctl (see the sketch after this list):

- kernel.unknown_nmi_panic=1

- kernel.panic_on_unrecovered_nmi=1

- vm.panic_on_oom=1
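A minimal sketch of making these settings persistent, assuming you keep them in /etc/sysctl.conf:

# append to /etc/sysctl.conf
kernel.unknown_nmi_panic = 1
kernel.panic_on_unrecovered_nmi = 1
vm.panic_on_oom = 1

# then apply without rebooting
# sysctl -p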

Test by crashing the system intentionally

Enable SysRq if not done yet:

echo 1 > /proc/sys/kernel/sysrq

Then, crash the system:

echo "c" > /proc/sysrq-trigger

Magic SysRq keys

The SysRq magic keys are your best friends when your system hangs (no mouse, no keyboard, frozen screen… I’m sure you’ve been there once or twice).

Basically, it’s a set of key combinations that you hit in that situation to generate a memory dump.

In a nutshell, this is how to enable them:

# echo 1 > /proc/sys/kernel/sysrq

Or in sysctl.conf:

kernel.sysrq=1

How to trigger a SysRq event during a hang:

Alt+PrintScreen+(commandKey)

Or intentionally:

echo "commandKey" > /proc/sysrq-trigger

CommandKey list (a short usage example follows the list):

m – memory allocation

t – thread state

p – CPU registers and flags

c – CRASH the system

s – sync all mounted filesystems

u – remount all filesystems read-only

b – reboot the machine

o – power off the machine
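For example (an illustration, not a sequence prescribed by the original post), on a hung box you could capture some state before forcing the crash:

Alt+PrintScreen+m (memory info)
Alt+PrintScreen+t (thread states)
Alt+PrintScreen+c (trigger the crashdump)

Or the equivalent through the trigger file:

# echo m > /proc/sysrq-trigger
# echo t > /proc/sysrq-trigger
# echo c > /proc/sysrq-trigger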

Generate a vmcore to analyse soft lockups.

When your system is subject to soft lockups:

BUG: soft lockup - CPU#3 stuck for 11s! [frob:2342]

Make sure that kdump is correctly set up and then:

# echo 1 > /proc/sys/kernel/softlockup_panic

You should have a vmcore generated after that.
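To make this persistent across reboots (a small addition to the original steps), the same knob can be set in /etc/sysctl.conf:

kernel.softlockup_panic = 1

Then apply it with “sysctl -p”.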

Vmcore in Virtual Machines (KVM based)

When one of your guest OSes is hanging, you can simply generate the dump with the following command:

virsh dump domain-name /tmp/dumpfile

This applies when the VM hangs; for a kernel panic inside the guest, install kdump in the VM as you would for a physical machine.
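For instance (the domain name and output path below are made up for illustration), after finding the domain with “virsh list”:

# virsh dump rhel5-guest /var/crash/rhel5-guest.core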

Ok, we’ve seen how to generate a vmcore from a kernel panic or a hung system situation.

But you’re not ready yet to provide the root cause to your management, and god knows they want to know why the server “you” are administering is crashing in production.

The analysis of the vmcore is the answer.

The basic analysis presented here can help you find out which process crashed the server – that can keep management quiet for a while.

A deeper analysis, which can be done by kernel aficionados, focuses more on pinpointing the piece of code behind that process which actually crashed the system.

What’s needed to analyse the vmcore:

- the “crash” utility

- the “kernel-debuginfo” and “kernel-debuginfo-common” packages for your kernel version

- a vmcore (can be handy)

How to use it:

# crash /usr/lib/debug/lib/modules/2.6.18-194.17.4.el5/vmlinux /var/crash/127.0.0.1-2011-03-16-12\:23\:06/vmcore

Commands available:

crash> sys

crash> bt -a

crash> mod

crash> log

A full example follows. The /var/crash partition:

# df -h /var/crash/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/system-varcrash
372M 11M 343M 3% /var/crash

Necessary utilities (do not forget kernel-debuginfo):

# rpm -q kexec-tools crash
kexec-tools-1.102pre-96.el5_5.4
crash-4.1.2-4.el5_5.1

kdump.conf configuration (vmcore location and page selection/compression):

# cat /etc/kdump.conf

# kernel crash dump conf
# Only dump the pages we need
core_collector makedumpfile -d 31 -c
# Save all vmcores to / (relative to the filesystem specified below)
path /
# Mount the /var/crash LV
ext3 /dev/system/varcrash

Add kernel crashkernel option:

# grep crash /boot/grub/grub.conf
kernel /vmlinuz-2.6.18-194.17.4.el5 ro root=/dev/system/root rhgb quiet crashkernel=128M@16M

Checking that SysRq is enabled:

# cat /proc/sys/kernel/sysrq
1

Let’s provoke an intentional crash:

# echo "c" > /proc/sysrq-trigger

After rebooting the crashed system, a small vmcore is available:

# ls -shl /var/crash/127.0.0.1-2011-03-16-12:23:06
total 11M
11M -rw------- 1 root root 11M Mar 16 12:23 vmcore

VMcore Analysis with “crash”:

crash /usr/lib/debug/lib/modules/2.6.18-194.17.4.el5/vmlinux /var/crash/127.0.0.1-2011-03-16-12\:23\:06/vmcore

KERNEL: /usr/lib/debug/lib/modules/2.6.18-194.17.4.el5/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2011-03-16-12:23:06/vmcore [PARTIAL DUMP]
CPUS: 2
DATE: Wed Mar 16 12:23:01 2011
UPTIME: 00:09:23
LOAD AVERAGE: 0.00, 0.02, 0.00
TASKS: 132
NODENAME: sandbox3
RELEASE: 2.6.18-194.17.4.el5
VERSION: #1 SMP Wed Oct 20 13:03:08 EDT 2010
MACHINE: x86_64 (2926 Mhz)
MEMORY: 2 GB
PANIC: "SysRq : Trigger a crashdump"
PID: 2303
COMMAND: "bash"
TASK: ffff81007fa160c0 [THREAD_INFO: ffff81006f5e8000]
CPU: 0
STATE: TASK_RUNNING (SYSRQ)

Once in the crash environment, you can explore different system information related to the crash.

The logs (message dump)

crash> log
....
hdc: drive_cmd: error=0x04 { AbortedCommand }
ide: failed opcode was: 0xec
SysRq : Trigger a crashdump

The System data

crash> sys
KERNEL: /usr/lib/debug/lib/modules/2.6.18-194.17.4.el5/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2011-03-16-12:23:06/vmcore [PARTIAL DUMP]
CPUS: 2
DATE: Wed Mar 16 12:23:01 2011
UPTIME: 00:09:23
LOAD AVERAGE: 0.00, 0.02, 0.00
TASKS: 132
NODENAME: sandbox3
RELEASE: 2.6.18-194.17.4.el5
VERSION: #1 SMP Wed Oct 20 13:03:08 EDT 2010
MACHINE: x86_64 (2926 Mhz)
MEMORY: 2 GB
PANIC: "SysRq : Trigger a crashdump"

The kernel stack backtrace

crash> bt
PID: 2303 TASK: ffff81007fa160c0 CPU: 0 COMMAND: "bash"
#0 [ffff81006f5e9df0] crash_kexec at ffffffff800ad9ce
#1 [ffff81006f5e9eb0] sysrq_handle_crashdump at ffffffff801b4dcd
#2 [ffff81006f5e9ec0] __handle_sysrq at ffffffff801b4bc0
#3 [ffff81006f5e9f00] write_sysrq_trigger at ffffffff80109683
#4 [ffff81006f5e9f10] vfs_write at ffffffff80016aa6
#5 [ffff81006f5e9f40] sys_write at ffffffff80017373
#6 [ffff81006f5e9f80] tracesys at ffffffff8005d28d (via system_call)
RIP: 0000003edccc62c0 RSP: 00007fffa24ddcf8 RFLAGS: 00000246
RAX: ffffffffffffffda RBX: ffffffff8005d28d RCX: ffffffffffffffff
RDX: 0000000000000002 RSI: 00002b53c7ed1000 RDI: 0000000000000001
RBP: 0000000000000002 R8: 00000000ffffffff R9: 00002b53c46c2dd0
R10: 0000000000000013 R11: 0000000000000246 R12: 0000003edcf51780
R13: 00002b53c7ed1000 R14: 0000000000000002 R15: 0000000000000000
ORIG_RAX: 0000000000000001 CS: 0033 SS: 002b

The modules information:

crash> mod
MODULE           NAME      SIZE  OBJECT FILE
ffffffff88008000 ehci_hcd 66125 (not loaded) [CONFIG_KALLSYMS]
ffffffff88017980 ohci_hcd 56309 (not loaded) [CONFIG_KALLSYMS]
ffffffff88026e00 uhci_hcd 57433 (not loaded) [CONFIG_KALLSYMS]
ffffffff8803ff80 jbd 94769 (not loaded) [CONFIG_KALLSYMS]
ffffffff8806b180 ext3 168913 (not loaded) [CONFIG_KALLSYMS]
ffffffff88076780 virtio 39365 (not loaded) [CONFIG_KALLSYMS]

This is it for now; the “bt” and “sys” crash commands should be enough to find out which process is guilty.

Troubleshooting the code via the backtrace and syscalls, and correcting a potential bug, is another challenge that requires coding skills and broad knowledge of kernel internals.
