R︱The mlr package helps you pick the machine learning model best suited to your data (classification, regression), plus a Python/R machine learning cross-reference handbook
1. The mlr package in R
After running `install.packages("mlr")`, you can list every machine learning algorithm that mlr knows about, together with the package each one comes from:

a <- listLearners()
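Beyond dumping the full list, `listLearners()` can be narrowed by task type and by the capability columns shown in the table below. A minimal sketch, assuming you want classification learners only (the `"prob"` and `"missings"` property names mirror the table's columns; adjust them to your own requirements):

```r
library(mlr)

# Full list: one row per learner, with its source package and capabilities
a <- listLearners()

# Only classification learners that can output class probabilities
# and cope with missing values
cls <- listLearners("classif", properties = c("prob", "missings"))
cls[, c("class", "package")]
```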
I learned about this package from the CDA online course《R语言与机器学习实战》taught by 余文华; it looks very useful and is worth exploring in more depth later. The table below lists the 57 learners that listLearners() reports in R (classification, clustering, regression, and survival), the package each one comes from, and some of their data requirements.
| # | class | name | short.name | package | note | type | installed | numerics | factors | ordered | missings | weights | prob | oneclass | twoclass | multiclass | class.weights | se | lcens | rcens | icens |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | classif.avNNet | Neural Network | avNNet | nnet | `size` has been set to `3` by default. Doing bagging training of `nnet` if set `bag = TRUE`. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 2 | classif.binomial | Binomial Regression | binomial | stats | Delegates to `glm` with freely choosable binomial link function via learner parameter `link`. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 3 | classif.C50 | C50 | C50 | C50 | | classif | TRUE | TRUE | TRUE | FALSE | TRUE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 4 | classif.cforest | Random forest based on conditional inference trees | cforest | party | See `?ctree_control` for possible breakage for nominal features with missingness. | classif | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 5 | classif.ctree | Conditional Inference Trees | ctree | party | See `?ctree_control` for possible breakage for nominal features with missingness. | classif | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 6 | classif.cvglmnet | GLM with Lasso or Elasticnet Regularization (Cross Validated Lambda) | cvglmnet | glmnet | The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise. Factors automatically get converted to dummy columns, ordered factors to integer. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 7 | classif.gausspr | Gaussian Processes | gausspr | kernlab | Kernel parameters have to be passed directly and not by using the `kpar` list in `gausspr`. Note that `fit` has been set to `FALSE` by default for speed. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 8 | classif.gbm | Gradient Boosting Machine | gbm | gbm | `keep.data` is set to FALSE to reduce memory requirements. Note on param 'distribution': gbm will select 'bernoulli' by default for 2 classes, and 'multinomial' for multiclass problems. The latter is the only setting that works for > 2 classes. | classif | TRUE | TRUE | TRUE | FALSE | TRUE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 9 | classif.glmnet | GLM with Lasso or Elasticnet Regularization | glmnet | glmnet | The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise. Factors automatically get converted to dummy columns, ordered factors to integer. Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default, but needs to be tuned by the user. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 10 | classif.h2o.deeplearning | h2o.deeplearning | h2o.dl | h2o | | classif | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 11 | classif.h2o.gbm | h2o.gbm | h2o.gbm | h2o | 'distribution' is set automatically to 'gaussian'. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 12 | classif.h2o.glm | h2o.glm | h2o.glm | h2o | 'family' is always set to 'binomial' to get a binary classifier. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 13 | classif.h2o.randomForest | h2o.randomForest | h2o.rf | h2o | | classif | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 14 | classif.knn | k-Nearest Neighbor | knn | class | | classif | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 15 | classif.ksvm | Support Vector Machines | ksvm | kernlab | Kernel parameters have to be passed directly and not by using the `kpar` list in `ksvm`. Note that `fit` has been set to `FALSE` by default for speed. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE |
| 16 | classif.lda | Linear Discriminant Analysis | lda | MASS | Learner parameter `predict.method` maps to `method` in `predict.lda`. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 17 | classif.logreg | Logistic Regression | logreg | stats | Delegates to `glm` with `family = binomial(link = "logit")`. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 18 | classif.lssvm | Least Squares Support Vector Machine | lssvm | kernlab | `fitted` has been set to `FALSE` by default for speed. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 19 | classif.lvq1 | Learning Vector Quantization | lvq1 | class | | classif | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 20 | classif.mlp | Multi-Layer Perceptron | mlp | RSNNS | | classif | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 21 | classif.multinom | Multinomial Regression | multinom | nnet | | classif | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 22 | classif.naiveBayes | Naive Bayes | nbayes | e1071 | | classif | TRUE | TRUE | TRUE | FALSE | TRUE | FALSE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 23 | classif.nnet | Neural Network | nnet | nnet | `size` has been set to `3` by default. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 24 | classif.plsdaCaret | Partial Least Squares (PLS) Discriminant Analysis | plsdacaret | caret | | classif | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 25 | classif.probit | Probit Regression | probit | stats | Delegates to `glm` with `family = binomial(link = "probit")`. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 26 | classif.qda | Quadratic Discriminant Analysis | qda | MASS | Learner parameter `predict.method` maps to `method` in `predict.qda`. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 27 | classif.randomForest | Random Forest | rf | randomForest | Note that the rf can freeze the R process if trained on a task with 1 feature which is constant. This can happen in feature forward selection, also due to resampling, and you need to remove such features with removeConstantFeatures. | classif | TRUE | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | FALSE | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE |
| 28 | classif.rpart | Decision Tree | rpart | rpart | `xval` has been set to `0` by default for speed. | classif | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 29 | classif.svm | Support Vector Machines (libsvm) | svm | e1071 | | classif | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | TRUE | FALSE | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE |
| 30 | classif.xgboost | eXtreme Gradient Boosting | xgboost | xgboost | All settings are passed directly, rather than through `xgboost`'s `params` argument. `nrounds` has been set to `1` by default. `num_class` is set internally, so do not set this manually. | classif | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 31 | cluster.dbscan | DBScan Clustering | dbscan | fpc | A cluster index of NA indicates noise points. Specify `method = "dist"` if the data should be interpreted as dissimilarity matrix or object. Otherwise Euclidean distances will be used. | cluster | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 32 | cluster.kkmeans | Kernel K-Means | kkmeans | kernlab | `centers` has been set to `2L` by default. The nearest center in kernel distance determines cluster assignment of new data points. Kernel parameters have to be passed directly and not by using the `kpar` list in `kkmeans` | cluster | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 33 | regr.avNNet | Neural Network | avNNet | nnet | `size` has been set to `3` by default. | regr | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 34 | regr.cforest | Random Forest Based on Conditional Inference Trees | cforest | party | See `?ctree_control` for possible breakage for nominal features with missingness. | regr | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 35 | regr.ctree | Conditional Inference Trees | ctree | party | See `?ctree_control` for possible breakage for nominal features with missingness. | regr | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 36 | regr.gausspr | Gaussian Processes | gausspr | kernlab | Kernel parameters have to be passed directly and not by using the `kpar` list in `gausspr`. Note that `fit` has been set to `FALSE` by default for speed. | regr | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE |
| 37 | regr.gbm | Gradient Boosting Machine | gbm | gbm | `keep.data` is set to FALSE to reduce memory requirements, `distribution` has been set to `"gaussian"` by default. | regr | TRUE | TRUE | TRUE | FALSE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 38 | regr.glm | Generalized Linear Regression | glm | stats | 'family' must be a character and every family has its own link, i.e. family = 'gaussian', link.gaussian = 'identity', which is also the default. | regr | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE |
| 39 | regr.glmnet | GLM with Lasso or Elasticnet Regularization | glmnet | glmnet | Factors automatically get converted to dummy columns, ordered factors to integer. Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default, but needs to be tuned by the user. | regr | TRUE | TRUE | TRUE | TRUE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 40 | regr.h2o.deeplearning | h2o.deeplearning | h2o.dl | h2o | | regr | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 41 | regr.h2o.gbm | h2o.gbm | h2o.gbm | h2o | 'distribution' is set automatically to 'gaussian'. | regr | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 42 | regr.h2o.glm | h2o.glm | h2o.glm | h2o | 'family' is always set to 'gaussian'. | regr | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 43 | regr.h2o.randomForest | h2o.randomForest | h2o.rf | h2o | | regr | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 44 | regr.ksvm | Support Vector Machines | ksvm | kernlab | Kernel parameters have to be passed directly and not by using the `kpar` list in `ksvm`. Note that `fit` has been set to `FALSE` by default for speed. | regr | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 45 | regr.lm | Simple Linear Regression | lm | stats | | regr | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE |
| 46 | regr.mob | Model-based Recursive Partitioning Yielding a Tree with Fitted Models Associated with each Terminal Node | mob | party | | regr | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 47 | regr.nnet | Neural Network | nnet | nnet | `size` has been set to `3` by default. | regr | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 48 | regr.randomForest | Random Forest | rf | randomForest | See `?regr.randomForest` for information about se estimation. Note that the rf can freeze the R process if trained on a task with 1 feature which is constant. This can happen in feature forward selection, also due to resampling, and you need to remove such features with removeConstantFeatures. | regr | TRUE | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE |
| 49 | regr.rpart | Decision Tree | rpart | rpart | `xval` has been set to `0` by default for speed. | regr | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 50 | regr.rvm | Relevance Vector Machine | rvm | kernlab | Kernel parameters have to be passed directly and not by using the `kpar` list in `rvm`. Note that `fit` has been set to `FALSE` by default for speed. | regr | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 51 | regr.svm | Support Vector Machines (libsvm) | svm | e1071 | | regr | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 52 | regr.xgboost | eXtreme Gradient Boosting | xgboost | xgboost | All settings are passed directly, rather than through `xgboost`'s `params` argument. `nrounds` has been set to `1` by default. | regr | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE |
| 53 | surv.cforest | Random Forest based on Conditional Inference Trees | crf | party,survival | See `?ctree_control` for possible breakage for nominal features with missingness. | surv | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE |
| 54 | surv.coxph | Cox Proportional Hazard Model | coxph | survival | | surv | TRUE | TRUE | TRUE | FALSE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE |
| 55 | surv.cvglmnet | GLM with Regularization (Cross Validated Lambda) | cvglmnet | glmnet | Factors automatically get converted to dummy columns, ordered factors to integer. | surv | TRUE | TRUE | TRUE | TRUE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE |
| 56 | surv.glmnet | GLM with Regularization | glmnet | glmnet | Factors automatically get converted to dummy columns, ordered factors to integer. Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default, but needs to be tuned by the user. | surv | TRUE | TRUE | TRUE | TRUE | FALSE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE |
| 57 | surv.rpart | Survival Tree | rpart | rpart | `xval` has been set to `0` by default for speed. | surv | TRUE | TRUE | TRUE | TRUE | TRUE | TRUE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | FALSE | TRUE | FALSE |
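To actually choose among the learners in the table, mlr lets you wrap your data in a task and benchmark several candidates under the same resampling scheme. The sketch below is illustrative rather than part of the original post: it assumes the rpart, randomForest, and kernlab packages are installed, and the 5-fold cross-validation setup and learner choices are arbitrary examples.

```r
library(mlr)

# Classification task on the built-in iris data
task <- makeClassifTask(data = iris, target = "Species")

# A few candidate learners taken from the table above
lrns <- list(
  makeLearner("classif.rpart"),
  makeLearner("classif.randomForest"),
  makeLearner("classif.ksvm")
)

# Compare them with 5-fold cross-validation on accuracy
rdesc <- makeResampleDesc("CV", iters = 5)
bmr <- benchmark(lrns, task, rdesc, measures = acc)

# Aggregated accuracy per learner, as a data frame
getBMRAggrPerformances(bmr, as.df = TRUE)
```

The learner with the best cross-validated score on your own task, not on iris, is the one worth tuning further.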
2. Cross-referencing machine learning between Python and R