MMDeploy Installation Notes
MMDeploy TensorRT Tutorial
Step1: Create a virtual environment and install MMDetection
conda create -n openmmlab python=3.7 -y
conda activate openmmlab
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch -y
# install mmcv
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8/index.html
# install mmdetection
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install -v -e .
Step2: Download a pretrained MMDetection checkpoint
Download the checkpoint from this link and put it in {MMDET_ROOT}/checkpoints, where {MMDET_ROOT} is the root directory of your MMDetection codebase.
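For reference, a download sketch; the URL follows the usual OpenMMLab model-zoo naming for this checkpoint, but treat it as an assumption and verify it against the MMDetection model zoo page:
cd mmdetection
mkdir -p checkpoints
wget -P checkpoints https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth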
Step3: Download and install MMDeploy
- Run the following commands in the Anaconda environment to install MMDeploy:
conda activate openmmlab
git clone https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
git submodule update --init --recursive
pip install -e .  # install MMDeploy
Step4: Install TensorRT
Install TensorRT from its tar archive.
After extracting it, add the TensorRT environment variables to ~/.bashrc:
cd /the/path/of/tensorrt/tar/gz/file
tar -zxvf TensorRT-8.2.3.0.Linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz
# add the following lines to ~/.bashrc
export TENSORRT_DIR=$(pwd)/TensorRT-8.2.3.0
export LD_LIBRARY_PATH=$TENSORRT_DIR/lib:$LD_LIBRARY_PATH
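The TensorRT tar also ships Python wheels, and the model converter needs the tensorrt Python package. A minimal install sketch, assuming Python 3.7; the exact wheel filename is an assumption, so check the python/ directory inside your tar:
pip install ${TENSORRT_DIR}/python/tensorrt-8.2.3.0-cp37-none-linux_x86_64.whl
pip install pycuda  # commonly needed alongside the TensorRT Python package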
Step5: Install cuDNN
Install cuDNN 8.2 from its tar archive.
Extract the compressed file and set the environment variables:
cd /the/path/of/cudnn/tgz/file
tar -zxvf cudnn-11.3-linux-x64-v8.2.1.32.tgz
# add the following lines to ~/.bashrc
export CUDNN_DIR=$(pwd)/cuda
export LD_LIBRARY_PATH=$CUDNN_DIR/lib64:$LD_LIBRARY_PATH
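A quick sanity check that the extracted cuDNN is the expected 8.2.x release; for cuDNN 8 the version macros live in cudnn_version.h:
grep -A 2 "#define CUDNN_MAJOR" ${CUDNN_DIR}/include/cudnn_version.h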
Step6: Build Model Converter
Step6-1: Build Custom Ops
- TensorRT Custom Ops
cd ${MMDEPLOY_DIR}
mkdir -p build && cd build
cmake -DCMAKE_CXX_COMPILER=g++-7 \
-DMMDEPLOY_TARGET_BACKENDS=trt \
-DTENSORRT_DIR=${TENSORRT_DIR} \
-DCUDNN_DIR=${CUDNN_DIR} ..
make -j$(nproc)
Step6-2: Install Model Converter
cd ${MMDEPLOY_DIR}
pip install -e .
Step6-3: Verify that model conversion can run
python ${MMDEPLOY_DIR}/tools/check_env.py
# on success, the output includes:
# 2022-05-04 10:13:07,140 - mmdeploy - INFO - tensorrt: 8.2.3.0 ops_is_avaliable : True
Step6-4: Convert Model
- Once you have installed MMDeploy, you can convert a PyTorch model from the OpenMMLab model zoo to a backend model with one magic spell!
# Assume you have installed MMDeploy in ${MMDEPLOY_DIR} and MMDetection in ${MMDET_DIR}
# If you do not know the paths, just type `pip show mmdeploy` and `pip show mmdet` in your console.
python ${MMDEPLOY_DIR}/tools/deploy.py \
${MMDEPLOY_DIR}/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
${MMDET_DIR}/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
${MMDET_DIR}/demo/demo.jpg \
--work-dir work_dirs \
--device cuda:0 \
--show \
--dump-info
# --work-dir: directory where the converted model is saved
# --device: replace cuda:0 with whichever CUDA device you use
# --show: display two images, the backend inference result and the original PyTorch result
# --dump-info: dump the SDK config files used by the MMDeploy SDK
At the same time, an ONNX model file end2end.onnx, a TensorRT engine end2end.engine, and deploy.json, detail.json, pipeline.json (SDK config files) will be generated in the work directory work_dirs.
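For reference, a rough listing of what a successful conversion leaves behind; the exact file set may vary with the MMDeploy version:
ls work_dirs
# deploy.json  detail.json  end2end.engine  end2end.onnx  pipeline.json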
Step6-5: Inference Model
- Now you can do model inference with the APIs provided by the backend. But what if you want to test the model instantly? We have some backend wrappers for you.
from mmdeploy.apis import inference_model
deploy_cfg = "/home/zranguai/Deploy/MMDeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py"
model_cfg = "/home/zranguai/Deploy/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py"
backend_files = ["/home/zranguai/Deploy/MMDeploy/work_dirs/end2end.engine"]
img = "/home/zranguai/Deploy/mmdetection/demo/demo.jpg"
device = 'cuda:0'
result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
print(result)
Step6-6: Evaluate Model
- You might wonder whether the backend model has the same precision as the original one, and how fast it can run. MMDeploy provides tools to test the model.
python ${MMDEPLOY_DIR}/tools/test.py \
${MMDEPLOY_DIR}/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
--model /home/zranguai/Deploy/MMDeploy/work_dirs/end2end.engine \
--metrics "bbox" \
--device cuda:0
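tools/test.py can also benchmark latency. A sketch, assuming your MMDeploy version supports the --speed-test flag (check python ${MMDEPLOY_DIR}/tools/test.py --help):
python ${MMDEPLOY_DIR}/tools/test.py \
${MMDEPLOY_DIR}/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
--model /home/zranguai/Deploy/MMDeploy/work_dirs/end2end.engine \
--speed-test \
--device cuda:0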
Step7: Build SDK
Step7-1: Build the MMDeploy SDK for TensorRT
Note: RTX 30-series GPUs require ppl.cv to be upgraded to the latest version; see the related issue.
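A build sketch for ppl.cv, consistent with the ${PPLCV_DIR}/cuda-build path used in the cmake call below; verify the steps against the MMDeploy Linux build guide:
git clone https://github.com/openppl-public/ppl.cv.git
cd ppl.cv
./build.sh cuda  # produces cuda-build/install, referenced by pplcv_DIR below
export PPLCV_DIR=$(pwd)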
cd ${MMDEPLOY_DIR}
mkdir -p build && cd build
# Note: pplcv_DIR must point at a recent ppl.cv build. ppl.cv is a high-performance image
# processing library from OpenPPL; see
# https://mmdeploy.readthedocs.io/en/latest/build/linux.html#install-dependencies-for-sdk
cmake -DMMDEPLOY_BUILD_SDK=ON \
-DCMAKE_CXX_COMPILER=g++-7 \
-DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
-DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
-DMMDEPLOY_TARGET_BACKENDS=trt \
-Dpplcv_DIR=${PPLCV_DIR}/cuda-build/install/lib/cmake/ppl \
-DTENSORRT_DIR=${TENSORRT_DIR} \
-DCUDNN_DIR=${CUDNN_DIR} \
-DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
-Dspdlog_DIR=/usr/lib/x86_64-linux-gnu/cmake/spdlog \
-DMMDEPLOY_CODEBASES=mmdet ..
make -j$(nproc) && make install
Step7-2: Build the demo
cd ${MMDEPLOY_DIR}/build/install/example
mkdir -p build && cd build
cmake -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
-DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
make object_detection
# suppress verbose logs
export SPDLOG_LEVEL=warn
# running the object detection example
./object_detection cuda ${work_dirs} ${path/to/an/image}
# Example: ./object_detection cuda ${MMDEPLOY_DIR}/work_dirs ${MMDET_DIR}/demo/demo.jpg
Debugging the code in CLion
Configure the following in Settings:
CMake options:
-DMMDeploy_DIR=/home/zranguai/Deploy/MMDeploy/build/install/lib/cmake/MMDeploy -DTENSORRT_DIR=/home/zranguai/Deploy/Backend/TensorRT/TensorRT-8.2.3.0 -DCUDNN_DIR=/home/zranguai/Deploy/Backend/TensorRT/cuda
Build directory:
/home/zranguai/Deploy/MMDeploy/build/install
Build options:
object_detection
Run configuration (program arguments):
cuda /home/zranguai/Deploy/MMDeploy/work_dirs /home/zranguai/Deploy/MMDeploy/demo/demo.jpg
++++++++++++++++++++++++++++++++++++++++
MMDeploy ONNX Runtime Tutorial
- See the official tutorial.
Here is an example of how to deploy and run inference on the Faster R-CNN model from MMDetection, from scratch.
step1: Create a virtual environment and install MMDetection
Create Virtual Environment and Install MMDetection.
Please run the following command in Anaconda environment to install MMDetection.
conda create -n openmmlab python=3.7 -y
conda activate openmmlab
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch -y
# install mmcv
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8/index.html
# install mmdetection
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install -v -e .
step2: Download a pretrained MMDetection checkpoint
Download the Checkpoint of Faster R-CNN
Download the checkpoint from this link and put it in {MMDET_ROOT}/checkpoints, where {MMDET_ROOT} is the root directory of your MMDetection codebase.
step3: Install MMDeploy and ONNX Runtime
Install MMDeploy and ONNX Runtime
step3-1: Install MMDeploy
Please run the following command in Anaconda environment to install MMDeploy.
conda activate openmmlab
git clone https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
git submodule update --init --recursive
pip install -e .  # install MMDeploy
step3-2a: Download ONNX Runtime
Once we have installed the MMDeploy, we should select an inference engine for model inference. Here we take ONNX Runtime as an example. Run the following command to install ONNX Runtime:
pip install onnxruntime==1.8.1
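A quick check that the Python package is importable and the version matches:
python -c "import onnxruntime; print(onnxruntime.__version__)"  # expect 1.8.1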
Then download the ONNX Runtime library to build the mmdeploy plugin for ONNX Runtime:
step3-2b: Build the ONNX Runtime custom ops (needed for model conversion)
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
cd onnxruntime-linux-x64-1.8.1
export ONNXRUNTIME_DIR=$(pwd)
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH  # these two lines can also go into ~/.bashrc
cd ${MMDEPLOY_DIR} # To MMDeploy root directory
mkdir -p build && cd build
# build ONNXRuntime custom ops
cmake -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
make -j$(nproc)
step3-2c: Build the MMDeploy SDK (needed for the C API)
# build MMDeploy SDK
# For installing OpenCV and spdlog, see
# https://mmdeploy.readthedocs.io/en/latest/build/linux.html#install-dependencies-for-sdk
cmake -DMMDEPLOY_BUILD_SDK=ON \
-DCMAKE_CXX_COMPILER=g++-7 \
-DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
-Dspdlog_DIR=/path/to/spdlog/lib/cmake/spdlog \
-DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
-DMMDEPLOY_TARGET_BACKENDS=ort \
-DMMDEPLOY_CODEBASES=mmdet ..
make -j$(nproc) && make install
# A concrete example of building the MMDeploy SDK
# (OpenCV and spdlog below were installed via apt-get)
cmake -DMMDEPLOY_BUILD_SDK=ON \
-DCMAKE_CXX_COMPILER=g++-7 \
-DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
-Dspdlog_DIR=/usr/lib/x86_64-linux-gnu/cmake/spdlog \
-DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
-DMMDEPLOY_TARGET_BACKENDS=ort \
-DMMDEPLOY_CODEBASES=mmdet ..
# ${MMDEPLOY_DIR}, ${MMDET_DIR} and ${ONNXRUNTIME_DIR} can all be defined in ~/.bashrc; run `source ~/.bashrc` to apply them
Extra: verify that the backend and custom ops are installed correctly
python ${MMDEPLOY_DIR}/tools/check_env.py
step4: Model Conversion
Once we have installed MMDetection, MMDeploy, and ONNX Runtime and built the plugin for ONNX Runtime, we can convert the Faster R-CNN to a .onnx model file that ONNX Runtime can consume. Run the following commands to use our deploy tools:
# Assume you have installed MMDeploy in ${MMDEPLOY_DIR} and MMDetection in ${MMDET_DIR}
# If you do not know the paths, just type `pip show mmdeploy` and `pip show mmdet` in your console.
python ${MMDEPLOY_DIR}/tools/deploy.py \
${MMDEPLOY_DIR}/configs/mmdet/detection/detection_onnxruntime_dynamic.py \
${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
${MMDET_DIR}/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
${MMDET_DIR}/demo/demo.jpg \
--work-dir work_dirs \
--device cpu \
--show \
--dump-info
# --work-dir: directory where the converted model is saved
# --show: display two images, the backend inference result and the original PyTorch result
# --dump-info: dump the SDK config files used by the MMDeploy SDK
# Notes
# ${MMDEPLOY_DIR} and ${MMDET_DIR} are already defined in ~/.bashrc
# once converted, the model can be run for inference through the Python API
Example: Inference Model
Now you can do model inference with the APIs provided by the backend. But what if you want to test the model instantly? We have some backend wrappers for you.
from mmdeploy.apis import inference_model
result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
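As in the TensorRT section, here is a filled-in sketch; every path below is a placeholder, so point them at your own MMDeploy/MMDetection checkouts and the work_dirs produced by the conversion:
from mmdeploy.apis import inference_model

# All paths below are placeholders -- adjust them to your own layout.
deploy_cfg = "/path/to/mmdeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py"
model_cfg = "/path/to/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py"
backend_files = ["/path/to/mmdeploy/work_dirs/end2end.onnx"]
img = "/path/to/mmdetection/demo/demo.jpg"
device = "cpu"
result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
print(result)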
If the script runs successfully, two images will be displayed on the screen one by one: the first is the inference result of ONNX Runtime and the second is the result of PyTorch. At the same time, an ONNX model file end2end.onnx and three JSON files (SDK config files) will be generated in the work directory work_dirs.
step5: Run MMDeploy SDK demo
After model conversion, the SDK model is saved in the directory ${work_dir}.
Here is a recipe for building & running the object detection demo.
cd build/install/example
# path to onnxruntime ** libraries **
export LD_LIBRARY_PATH=/path/to/onnxruntime/lib
# Example: export LD_LIBRARY_PATH=/home/zranguai/Deploy/Backend/ONNXRuntime/onnxruntime-linux-x64-1.8.1/lib
mkdir -p build && cd build
cmake -DOpenCV_DIR=path/to/OpenCV/lib/cmake/OpenCV \
-DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
make object_detection
# Example:
# cmake -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
# -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
# suppress verbose logs
export SPDLOG_LEVEL=warn
# running the object detection example
./object_detection cpu ${work_dirs} ${path/to/an/image}
# Example: ./object_detection cpu ${MMDEPLOY_DIR}/work_dirs ${MMDET_DIR}/demo/demo.jpg
If the demo runs successfully, an image named "output_detection.png" showing the detected objects should be generated.
++++++++++++++++++++++++++++++++++++++++
MMDeploy OpenVINO Tutorial
Step1: Create a virtual environment and install MMDetection
conda create -n openmmlab python=3.7 -y
conda activate openmmlab
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch -y
# install mmcv
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8/index.html
# install mmdetection
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install -v -e .
Step2: Download a pretrained MMDetection checkpoint
Download the checkpoint from this link and put it in {MMDET_ROOT}/checkpoints, where {MMDET_ROOT} is the root directory of your MMDetection codebase.
Step3: Download and install MMDeploy
- Run the following commands in the Anaconda environment to install MMDeploy:
conda activate openmmlab
git clone https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
git submodule update --init --recursive
pip install -e .  # install MMDeploy
Step4: Download OpenVINO
pip install openvino-dev
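A quick sanity check of the install; mo is the Model Optimizer entry point shipped with openvino-dev, and the Python import below follows the 2021.x API, so treat both as assumptions to verify against your version:
mo --version
python -c "from openvino.inference_engine import get_version; print(get_version())"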
step4-1: Per the official docs, OpenVINO needs no custom ops here
step4-2: Optional: install OpenVINO for use with the MMDeploy SDK
Optional. If you want to use OpenVINO in the MMDeploy SDK, please install and configure it by following the guide.
- OpenVINO installation
tar -xvzf l_openvino_toolkit_p_2020.4.287.tgz
cd l_openvino_toolkit_p_2020.4.287
sudo ./install_GUI.sh  # click Next through the installer
cd /opt/intel/openvino/install_dependencies
sudo ./install_openvino_dependencies.sh
vi ~/.bashrc
- Append the following lines to the end of ~/.bashrc:
# set env for openvino
source /opt/intel/openvino_2021/bin/setupvars.sh  # note: use your own installation path
export INTEL_OPENVINO_DIR=/opt/intel/openvino_2021
export LD_LIBRARY_PATH=/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64
- Run `source ~/.bashrc` to activate the environment.
- Model Optimizer configuration steps:
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites.sh  # you can install only the ONNX prerequisites
step4-3: Build the MMDeploy SDK (OpenVINO)
cd ${MMDEPLOY_DIR} # To MMDeploy root directory
mkdir -p build && cd build
cmake -DMMDEPLOY_BUILD_SDK=ON \
-DCMAKE_CXX_COMPILER=g++-7 \
-DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
-Dspdlog_DIR=/path/to/spdlog/lib/cmake/spdlog \
-DInferenceEngine_DIR=${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/share \
-DMMDEPLOY_TARGET_BACKENDS=openvino \
-DMMDEPLOY_CODEBASES=mmdet ..
make -j$(nproc) && make install
# A concrete example of building the MMDeploy SDK
# (OpenCV below is the one installed via apt-get)
cmake -DMMDEPLOY_BUILD_SDK=ON \
-DCMAKE_CXX_COMPILER=g++-7 \
-DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
-Dspdlog_DIR=/usr/lib/x86_64-linux-gnu/cmake/spdlog \
-DInferenceEngine_DIR=${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/share \
-DMMDEPLOY_TARGET_BACKENDS=openvino \
-DMMDEPLOY_CODEBASES=mmdet ..
# ${INTEL_OPENVINO_DIR} is defined in ~/.bashrc
Extra: verify that the backend is installed correctly (note: OpenVINO needs no custom ops)
python ${MMDEPLOY_DIR}/tools/check_env.py
# Open question: when export LD_LIBRARY_PATH=/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64 is written into ~/.bashrc, the error "libopencv_ml.so.4.5: cannot open shared object file: No such file or directory" can appear.
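A possible workaround, assuming the missing libopencv_ml.so.4.5 is the OpenCV bundled with OpenVINO: put OpenVINO's own opencv/lib on the path and append to LD_LIBRARY_PATH instead of overwriting it (an untested guess; adjust to your install):
export LD_LIBRARY_PATH=$INTEL_OPENVINO_DIR/opencv/lib:$INTEL_OPENVINO_DIR/deployment_tools/inference_engine/lib/intel64:$LD_LIBRARY_PATH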
Step5: Model Conversion (this step can also be done before step4-1)
# Assume you have installed MMDeploy in ${MMDEPLOY_DIR} and MMDetection in ${MMDET_DIR}
# If you do not know the paths, just type `pip show mmdeploy` and `pip show mmdet` in your console.
python ${MMDEPLOY_DIR}/tools/deploy.py \
${MMDEPLOY_DIR}/configs/mmdet/detection/detection_openvino_dynamic-300x300.py \
${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
${MMDET_DIR}/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
${MMDET_DIR}/demo/demo.jpg \
--work-dir work_dirs \
--device cpu \
--show \
--dump-info
# --work-dir: directory where the converted model is saved
# --show: display two images, the backend inference result and the original PyTorch result
# --dump-info: dump the SDK config files used by the MMDeploy SDK
# Notes
# ${MMDEPLOY_DIR} and ${MMDET_DIR} are already defined in ~/.bashrc
# once converted, the model can be run for inference through the Python API
Example: Inference Model
Now you can do model inference with the APIs provided by the backend. But what if you want to test the model instantly? We have some backend wrappers for you.
from mmdeploy.apis import inference_model
result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
Example:
deploy_cfg = "/home/zranguai/Deploy/MMDeploy/configs/mmdet/detection/detection_openvino_dynamic-300x300.py"
model_cfg = "/home/zranguai/Deploy/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py"
backend_files = ["/home/zranguai/Deploy/MMDeploy/work_dirs/end2end.xml"]
img = "/home/zranguai/Deploy/mmdetection/demo/demo.jpg"
device = 'cpu'
from mmdeploy.apis import visualize_model
visualize_model(model_cfg, deploy_cfg, backend_files[0], img, device, show_result=True)
If the script runs successfully, two images will be displayed on the screen one by one: the first is the inference result of OpenVINO and the second is the result of PyTorch. At the same time, an ONNX model file end2end.onnx, end2end.bin (the binary weight and bias data), end2end.xml (describing the network topology), end2end.mapping, and deploy.json, detail.json, pipeline.json (SDK config files) will be generated in the work directory work_dirs.
Step6: Run MMDeploy SDK demo (for OpenVINO)
After model conversion, the SDK model is saved in the directory ${work_dir}.
Here is a recipe for building & running the object detection demo.
cd build/install/example
# path to the OpenVINO ** libraries **
export LD_LIBRARY_PATH=/path/to/openvino/deployment_tools/inference_engine/lib/intel64
# Example: export LD_LIBRARY_PATH=/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64
mkdir -p build && cd build
# use the OpenCV bundled with OpenVINO
cmake -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
-DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
make object_detection
# Example 2:
# cmake -DOpenCV_DIR=/opt/intel/openvino_2021/opencv/cmake \
# -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
# suppress verbose logs
export SPDLOG_LEVEL=warn
# running the object detection example
./object_detection cpu ${work_dirs} ${path/to/an/image}
# Example: ./object_detection cpu ${MMDEPLOY_DIR}/work_dirs ${MMDET_DIR}/demo/demo.jpg
# Possible error: the exported XML carries name: torch-jit-export version="11"; fix: reinstalling resolves it
If the demo runs successfully, an image named "output_detection.png" showing the detected objects should be generated.