PVE Daemons
pmxcfs
The Proxmox Cluster file system (“pmxcfs”) is a database-driven file system for storing configuration files, replicated in real time to all cluster nodes using corosync. We use this to store all PVE related configuration files.
Although the file system stores all data inside a persistent database on disk, a copy of the data resides in RAM. This imposes a restriction on the maximum size, which is currently 30 MB. This is still enough to store the configuration of several thousand virtual machines.
We use the Corosync Cluster Engine for cluster communication, and SQLite for the database file. The file system is implemented in user space using FUSE.
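A quick, hedged way to confirm the FUSE mount on a node (output abbreviated; exact mount options vary by version):

root@cu-pve04:~# findmnt /etc/pve
TARGET   SOURCE    FSTYPE OPTIONS
/etc/pve /dev/fuse fuse   rw,nosuid,nodev,...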
The file system is mounted at:
/etc/pve

This service is usually started and managed using the systemd toolset. The service is called pve-cluster.

root@cu-pve04:~# systemctl status pve-cluster
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-04-30 21:20:23 CST; 1 weeks 0 days ago
Main PID: 3745 (pmxcfs)
Tasks: 13 (limit: 17203)
Memory: 88.9M
CPU: 25min 56.856s
CGroup: /system.slice/pve-cluster.service
└─3745 /usr/bin/pmxcfs

May 08 15:39:22 cu-pve04 pmxcfs[3745]: [dcdb] notice: received all states
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [dcdb] notice: leader is 1/3745
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [dcdb] notice: synced members: 1/3745, 3/3878
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [dcdb] notice: start sending inode updates
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [dcdb] notice: sent all (3) updates
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [dcdb] notice: all data is up to date
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [status] notice: received sync request (epoch 1/3745/0000000F)
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [status] notice: received all states
May 08 15:39:22 cu-pve04 pmxcfs[3745]: [status] notice: all data is up to date
May 08 15:39:23 cu-pve04 pmxcfs[3745]: [status] notice: received log

root@cu-pve04:/var/lib/pve-cluster# ls -l
total 4136
-rw------- 1 root root 77824 May 8 15:17 config.db
-rw------- 1 root root 32768 May 8 15:18 config.db-shm
-rw------- 1 root root 4124152 May 8 15:18 config.db-wal
root@cu-pve04:/var/lib/pve-cluster# file *
config.db: SQLite 3.x database, last written using SQLite version 3016002
config.db-shm: data
config.db-wal: SQLite Write-Ahead Log, version 3007000
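Out of curiosity, the database can be inspected read-only with the sqlite3 client. This is a sketch assuming the schema current pmxcfs versions use (a single table named 'tree'); never modify the database while pmxcfs is running:

root@cu-pve04:~# sqlite3 /var/lib/pve-cluster/config.db "SELECT inode, name FROM tree LIMIT 5;"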
root@cu-pve04:/etc/pve# ls -l
total 5
-rw-r----- 1 root www-data 451 Apr 30 14:23 authkey.pub
-rw-r----- 1 root www-data 881 Apr 30 20:47 ceph.conf
-rw-r----- 1 root www-data 545 Apr 30 17:22 corosync.conf
-rw-r----- 1 root www-data 16 Apr 30 14:09 datacenter.cfg
-rw-r----- 1 root www-data 2057 Apr 30 14:23 pve-root-ca.pem
-rw-r----- 1 root www-data 1675 Apr 30 14:23 pve-www.key
-rw-r----- 1 root www-data 177 May 7 17:54 storage.cfg
-rw-r----- 1 root www-data 66 May 4 20:38 user.cfg
-rw-r----- 1 root www-data 119 Apr 30 14:23 vzdump.cron
drwxr-xr-x 2 root www-data 0 Apr 30 14:23 nodes
drwx------ 2 root www-data 0 Apr 30 14:23 priv
lrwxr-xr-x 1 root www-data 0 Jan 1 1970 local -> nodes/cu-pve04
lrwxr-xr-x 1 root www-data 0 Jan 1 1970 lxc -> nodes/cu-pve04/lxc
lrwxr-xr-x 1 root www-data 0 Jan 1 1970 openvz -> nodes/cu-pve04/openvz
lrwxr-xr-x 1 root www-data 0 Jan 1 1970 qemu-server -> nodes/cu-pve04/qemu-server

A virtual machine consists of two kinds of files: its configuration (100.conf) and its disk images (100.disk). The /etc/pve/qemu-server/<VMID>.conf file stores the VM configuration, where "VMID" is the numeric ID of the given VM. One can use the qm command to generate and modify those files (see the examples after the listing below).

root@cu-pve04:/etc/pve/nodes# ls -R
.:
cu-pve04  cu-pve05  cu-pve06

./cu-pve04:
lrm_status  lxc  openvz  priv  pve-ssl.key  pve-ssl.pem  qemu-server

./cu-pve04/lxc:

./cu-pve04/openvz:

./cu-pve04/priv:

./cu-pve04/qemu-server:
100.conf  101.conf  102.conf  103.conf  105.conf  106.conf  107.conf

./cu-pve05:
lrm_status  lxc  openvz  priv  pve-ssl.key  pve-ssl.pem  qemu-server

./cu-pve05/lxc:

./cu-pve05/openvz:

./cu-pve05/priv:

./cu-pve05/qemu-server:
104.conf  108.conf

./cu-pve06:
lrm_status  lxc  openvz  priv  pve-ssl.key  pve-ssl.pem  qemu-server

./cu-pve06/lxc:

./cu-pve06/openvz:

./cu-pve06/priv:

./cu-pve06/qemu-server:
109.conf
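Since the <VMID>.conf files are plain key-value text, qm is the supported way to read and change them. A few examples against VMID 100 from the listing above (the memory value is arbitrary):

root@cu-pve04:~# qm config 100              # print the parsed configuration
root@cu-pve04:~# qm set 100 --memory 4096   # update 100.conf through the API
root@cu-pve04:~# qm showcmd 100             # show the kvm command line derived from it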
pvedaemon - PVE API Daemon
This daemon exposes the whole Proxmox VE API on 127.0.0.1:85. It runs as root and has permission to do all privileged operations.
The daemon listens to a local address only, so you cannot access it from outside. The pveproxy daemon exposes the API to the outside world.
root@cu-pve04:~# pvedaemon status
running
root@cu-pve04:~# ss -lntp|grep proxy
LISTEN 0 128 0.0.0.0:3128 0.0.0.0:* users:(("spiceproxy work",pid=2195573,fd=6),("spiceproxy",pid=7541,fd=6))
LISTEN 0 128 0.0.0.0:8006 0.0.0.0:* users:(("pveproxy worker",pid=2310656,fd=6),("pveproxy worker",pid=2305989,fd=6),("pveproxy worker",pid=2295860,fd=6),("pveproxy",pid=7522,fd=6))
root@cu-pve04:~# ss -lntp|grep daemon
LISTEN 0 128 127.0.0.1:85 0.0.0.0:* users:(("pvedaemon worke",pid=2252583,fd=6),("pvedaemon worke",pid=2250382,fd=6),("pvedaemon worke",pid=2250172,fd=6),("pvedaemon",pid=4500,fd=6))
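Since pvedaemon only binds 127.0.0.1:85, the convenient way to exercise the same API tree from a root shell on the node is the bundled pvesh client, for example:

root@cu-pve04:~# pvesh get /nodes                   # list cluster nodes
root@cu-pve04:~# pvesh get /nodes/cu-pve04/qemu     # list VMs on this node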
--------------------------------------------------------
pveproxy - PVE API Proxy Daemon
This daemon exposes the whole Proxmox VE API on TCP port 8006 using HTTPS. It runs as user www-data and has very limited permissions. Operations requiring more permissions are forwarded to the local pvedaemon.
Requests targeted for other nodes are automatically forwarded to those nodes. This means that you can manage your whole cluster by connecting to a single Proxmox VE node.
It is possible to configure “apache2”-like access control lists. Values are read from file /etc/default/pveproxy.
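For example, to restrict API access to the local subnet, /etc/default/pveproxy could contain something like the following (the subnet is a placeholder; restart pveproxy afterwards):

ALLOW_FROM="192.168.1.0/24"
DENY_FROM="all"
POLICY="allow"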
root@cu-pve04:~# pveproxy status
running
-----------------------------------------------------------------------
pvestatd - PVE Status Daemon
This daemon queries the status of VMs, storages and containers at regular intervals. The result is sent to all nodes in the cluster.
root@cu-pve04:/etc/default# pvestatd status
running
----------------------------------------------------------------------
qmeventd - PVE Qemu Eventd Daemon
This service is usually started and managed using the systemd toolset. The service is called qmeventd. When a QEMU process exits, the daemon runs /usr/sbin/qm cleanup for the affected VM.

root@cu-pve04:/etc/default# systemctl status qmeventd
● qmeventd.service - PVE Qemu Event Daemon
Loaded: loaded (/lib/systemd/system/qmeventd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-04-30 21:20:19 CST; 1 weeks 0 days ago
Main PID: 2829 (qmeventd)
Tasks: 1 (limit: 17203)
Memory: 8.8M
CPU: 10.100s
CGroup: /system.slice/qmeventd.service
└─2829 /usr/sbin/qmeventd /var/run/qmeventd.sock

May 07 13:47:13 cu-pve04 qmeventd[2807]: OK
May 07 13:47:13 cu-pve04 qmeventd[2807]: Finished cleanup for 103
May 08 15:01:40 cu-pve04 qmeventd[2807]: file /etc/pve/storage.cfg line 4 (section 'local') - ignore config line: cephfs: kycfs
May 08 15:01:40 cu-pve04 qmeventd[2807]: file /etc/pve/storage.cfg line 5 (section 'local') - unable to parse value of 'path': duplicate attribute
May 08 15:01:40 cu-pve04 qmeventd[2807]: file /etc/pve/storage.cfg line 6 (section 'local') - unable to parse value of 'content': duplicate attribute
May 08 15:01:40 cu-pve04 qmeventd[2807]: Starting cleanup for 107
May 08 15:01:40 cu-pve04 qmeventd[2807]: trying to acquire lock...
May 08 15:01:41 cu-pve04 qmeventd[2807]: OK
May 08 15:01:41 cu-pve04 qmeventd[2807]: storage 'kycfs' does not exists
May 08 15:01:41 cu-pve04 qmeventd[2807]: Finished cleanup for 107
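The parse errors above are a typical symptom of a new stanza having been appended to /etc/pve/storage.cfg without a separating blank line, so 'cephfs: kycfs' and its attributes were read as duplicates inside the 'local' section. A corrected sketch (the path and content values here are assumptions):

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

cephfs: kycfs
        path /mnt/pve/kycfs
        content backup,iso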
------------------------------------------------------
pve-ha-lrm - PVE Local Resource Manager Daemon
The local resource manager (LRM) controls the HA services running on the local node. It runs on every node and executes the state transitions requested by the cluster resource manager.
root@cu-pve05:~# pve-ha-lrm status
running
------------------------------------------
pve-ha-crm - PVE Cluster Resource Manager Daemon
The cluster resource manager (CRM) makes the cluster-wide HA decisions. It runs on every node, but only the CRM on the current master node is active.
root@cu-pve05:~# pve-ha-crm status
running
------------------------------------------
spiceproxy - SPICE Proxy Service
root@cu-pve04:~# spiceproxy status
running
This daemon listens on TCP port 3128, and implements an HTTP proxy to forward CONNECT requests from the SPICE client to the correct Proxmox VE VM. It runs as user www-data and has very limited permissions.
It is possible to configure "apache2"-like access control lists. Values are read from file /etc/default/pveproxy.
------------------------------------------
pve-firewall - PVE Firewall Daemon
All firewall-related configuration is stored on the Proxmox cluster file system, so those files are automatically distributed to all cluster nodes, and the pve-firewall service updates the underlying iptables rules automatically on changes.
You can configure anything using the GUI (i.e. Datacenter → Firewall, or on a Node → Firewall), or you can edit the configuration files directly using your preferred editor.
If you enable the firewall, traffic to all hosts is blocked by default. The only exceptions are the WebGUI (port 8006) and SSH (port 22) from your local network.
Each virtual network device has its own firewall enable flag. So you can selectively enable the firewall for each interface. This is required in addition to the general firewall enable option.
The firewall runs two service daemons on each node:
pvefw-logger: NFLOG daemon (ulogd replacement).
pve-firewall: updates iptables rules
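The files live under /etc/pve/firewall/ on pmxcfs; cluster.fw holds the cluster-wide settings. A minimal sketch that turns the firewall on while keeping the WebGUI and SSH reachable from a local subnet (the subnet is a placeholder):

[OPTIONS]
enable: 1

[RULES]
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 8006
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 22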
root@cu-pve04:~# pve-firewall compile
ipset cmdlist:
iptables cmdlist:
ip6tables cmdlist:
ebtables cmdlist:
no changes
firewall disabled
root@cu-pve04:~# pve-firewall localnet
local hostname: cu-pve04
local IP address: 192.168.1.4
network auto detect: 192.168.1.0/24
using detected local_network: 192.168.1.0/24
root@cu-pve04:~# pve-firewall status
Status: disabled/running
root@cu-pve04:~# pve-firewall stop
root@cu-pve04:~# pve-firewall status
Status: disabled/stopped
----------------------------------------------
root@cu-pve04:~# ps -ef|grep pve
root 2697 1 0 16:26 ? 00:00:00 /usr/sbin/pvefw-logger
ceph 4418 1 2 16:26 ? 00:00:28 /usr/bin/ceph-mgr -f --cluster ceph --id cu-pve04 --setuser ceph --setgroup ceph
ceph 4421 1 0 16:26 ? 00:00:02 /usr/bin/ceph-mds -f --cluster ceph --id cu-pve04 --setuser ceph --setgroup ceph
ceph 4438 1 2 16:26 ? 00:00:33 /usr/bin/ceph-mon -f --cluster ceph --id cu-pve04 --setuser ceph --setgroup ceph
root 4739 1 0 16:26 ? 00:00:09 pvestatd
root 5028 1 0 16:26 ? 00:00:00 pvedaemon
root 5031 5028 0 16:26 ? 00:00:00 pvedaemon worker
root 5032 5028 0 16:26 ? 00:00:00 pvedaemon worker
root 5033 5028 0 16:26 ? 00:00:00 pvedaemon worker
root 5073 1 0 16:26 ? 00:00:00 pve-ha-crm
www-data 5093 1 0 16:26 ? 00:00:00 pveproxy
www-data 5096 5093 0 16:26 ? 00:00:01 pveproxy worker
www-data 5097 5093 0 16:26 ? 00:00:03 pveproxy worker
www-data 5098 5093 0 16:26 ? 00:00:04 pveproxy worker
root 5118 1 0 16:26 ? 00:00:00 pve-ha-lrm
root 8959 1 0 16:39 ? 00:00:01 pve-firewall
Ports used by Proxmox VE
Web interface: 8006
VNC Web console: 5900-5999
SPICE proxy: 3128
sshd (used for cluster actions): 22
rpcbind: 111
corosync multicast (if you run a cluster): 5404, 5405 UDP
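A quick way to verify which of these TCP ports are actually held by the expected daemons on a node:

root@cu-pve04:~# ss -tlnp | egrep ':(85|111|3128|8006) '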