Python Machine Learning Classic Algorithms: Code Examples and Mind Map (Essential for Mathematical Modeling)
Over the past few days I studied the classic machine learning algorithms. This served as my introduction to machine learning, and I implemented and recorded the code for the classic algorithms so they are easy to look up and reuse later.
These notes come in two parts. Part 1 is a mind map that lays out the machine learning development workflow as a framework, annotated with the relevant Python libraries, and is meant to be used as an index. Part 2 contains code examples for the algorithms (essentially just calling library APIs), so they can be copied, pasted, and lightly adapted later on, which is especially practical in mathematical modeling contests.
Part 1: Mind Map

Part 2: Code Examples
Machine Learning Code Examples
Imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import jieba  # Chinese word segmentation, used by the Chinese text feature demo
from sklearn.datasets import load_iris
from sklearn.datasets import fetch_20newsgroups  # used by the Naive Bayes demo
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import VarianceThreshold
from scipy.stats import pearsonr
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression, SGDRegressor, Ridge, LogisticRegression
from sklearn.metrics import mean_squared_error
from sklearn.metrics import classification_report
from sklearn.metrics import roc_auc_score
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
import joblib
Feature Engineering
Feature Extraction
def dict_demo():
    # Dictionary feature extraction: one-hot encodes the categorical 'city' field
    data = [{'city': '北京', 'temperature': 100}, {'city': '上海', 'temperature': 200},
            {'city': '广州', 'temperature': 300}]
    transfer = DictVectorizer()
    data_new = transfer.fit_transform(data)
    data_new = data_new.toarray()
    print(data_new)
    print(transfer.get_feature_names_out())
# dict_demo()
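As a side note, DictVectorizer also accepts sparse=False, which makes fit_transform return a dense NumPy array directly so the toarray() call is unnecessary:

transfer = DictVectorizer(sparse=False)  # fit_transform now returns a dense array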
def count_demo():
    data = ["I love love China", "I don't love China"]
    transfer = CountVectorizer()
    data_new = transfer.fit_transform(data)
    data_new = data_new.toarray()
    print(data_new)
    print(transfer.get_feature_names_out())
# count_demo()
def chinese_demo(d):
    tt = " ".join(list(jieba.cut(d)))
    return tt
# data = [
# "晚风轻轻飘荡,心事都不去想,那失望也不失望,惆怅也不惆怅,都在风中飞扬",
# "晚风轻轻飘荡,随我迎波逐浪,那欢畅都更欢畅,幻想更幻想,就像 你还在身旁"]
# res = []
# for t in data:
# res.append(chinese_demo(t))
#
# transfer = TfidfVectorizer()
# new_data = transfer.fit_transform(res)
# new_data = new_data.toarray()
# print(new_data)
# print(transfer.get_feature_names_out())
Data Preprocessing
def minmax_demo():
    data = pd.read_csv("datasets/dating.txt")
    data = data.iloc[:, 0:3]
    print(data)
    transfer = MinMaxScaler()
    data_new = transfer.fit_transform(data)
    print(data_new)
    return None
# minmax_demo()
def standard_demo():
    data = pd.read_csv("datasets/dating.txt")
    data = data.iloc[:, 0:3]
    print(data)
    transfer = StandardScaler()
    data_new = transfer.fit_transform(data)
    print(data_new)
    return None
# standard_demo()
def stats_demo():
    data = pd.read_csv("./datasets/factor_returns.csv")
    data = data.iloc[:, 1:10]
    transfer = VarianceThreshold(threshold=10)
    data_new = transfer.fit_transform(data)
    print(data_new)
    print(data_new.shape)
    df = pd.DataFrame(data_new, columns=transfer.get_feature_names_out())
    print(df)
# stats_demo()
def pear_demo():
    data = pd.read_csv("./datasets/factor_returns.csv")
    data = data.iloc[:, 1:10]
    print(data.corr(method="pearson"))
# pear_demo()
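pearsonr from scipy.stats is imported above but never used. As a minimal sketch of how it complements DataFrame.corr (it also returns a p-value, but only for one pair of features at a time), assuming the data frame from pear_demo is in scope and using its first two columns as placeholders:

col_a, col_b = data.columns[0], data.columns[1]
r, p_value = pearsonr(data[col_a], data[col_b])  # correlation coefficient and two-sided p-value
print(col_a, col_b, r, p_value)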
Model Training
Classification Algorithms
KNN
# Load the data
iris = load_iris()
# Split into training and test sets
x_train, x_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.3, random_state=42)
# Standardize the features (fit on the training set only)
transfer = StandardScaler()
transfer.fit(x_train)
x_train = transfer.transform(x_train)
x_test = transfer.transform(x_test)
# Train the model (the original snippet used an undefined `i`; a fixed value is used here)
estimator = KNeighborsClassifier(n_neighbors=3)
estimator.fit(x_train, y_train)
# Predict and evaluate
y_predict = estimator.predict(x_test)
score = estimator.score(x_test, y_test)
print("score:", score)
Naive Bayes
new = fetch_20newsgroups(subset="all")
x_train, x_test, y_train, y_test = train_test_split(new.data, new.target, random_state=42)
# Text feature extraction with TF-IDF
transfer = TfidfVectorizer()
transfer.fit(x_train)
x_train = transfer.transform(x_train)
x_test = transfer.transform(x_test)
estimator = MultinomialNB()
estimator.fit(x_train, y_train)
score = estimator.score(x_test, y_test)
print(score)
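To classify a new document with the fitted vectorizer and model, transform the text first and then predict. A minimal sketch, assuming transfer, estimator, and new from the block above are in scope; the sample sentence is made up:

sample = ["NASA launched a new space probe this week"]
pred = estimator.predict(transfer.transform(sample))
print(new.target_names[pred[0]])  # map the class index back to the newsgroup name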
Decision Tree
iris = load_iris()
x_train, x_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.3, random_state=42)
estimator = DecisionTreeClassifier(criterion='gini')
estimator.fit(x_train, y_train)
score = estimator.score(x_test, y_test)
print(score)
# Export the decision tree as a Graphviz .dot file for visualization
export_graphviz(estimator, out_file='tree.dot', feature_names=iris.feature_names)
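The .dot file still needs the external Graphviz tool to render. For a quick picture, scikit-learn can also draw the tree directly with matplotlib; a minimal sketch, assuming the fitted estimator and iris objects from the block above are in scope:

from sklearn.tree import plot_tree

plt.figure(figsize=(12, 8))
plot_tree(estimator, feature_names=iris.feature_names, class_names=list(iris.target_names), filled=True)
plt.show()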
Random Forest
# x and y are not defined in the original snippet; the iris data is used here as a stand-in
x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=0.7)
estimator = RandomForestClassifier(random_state=42, max_features='sqrt')
param_dict = {'n_estimators': range(10, 50), 'max_depth': range(5, 10)}
estimator = GridSearchCV(estimator=estimator, param_grid=param_dict, cv=3)
estimator.fit(x_train, y_train)
print(estimator.best_score_)
print(estimator.best_estimator_)
print(estimator.best_params_)
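Since GridSearchCV refits the best parameter combination on the whole training set by default (refit=True), scoring on the held-out test data gives the final generalization estimate:

print("test score:", estimator.score(x_test, y_test))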
Regression Algorithms
Linear Regression
def demo1():
    # The Boston housing dataset was removed from newer scikit-learn releases, so load it from the original source
    data_url = "http://lib.stat.cmu.edu/datasets/boston"
    raw_df = pd.read_csv(data_url, sep=r"\s+", skiprows=22, header=None)
    data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
    target = raw_df.values[1::2, 2]
    x_train, x_test, y_train, y_test = train_test_split(data, target, train_size=0.7, random_state=42)
    transfer = StandardScaler()
    transfer.fit(x_train)
    x_train = transfer.transform(x_train)
    x_test = transfer.transform(x_test)
    estimator = LinearRegression()
    estimator.fit(x_train, y_train)
    y_predict = estimator.predict(x_test)
    mse = mean_squared_error(y_test, y_predict)
    print("Normal equation - coefficients:", estimator.coef_)
    print("Normal equation - intercept:", estimator.intercept_)
    print(mse)
def demo2():
    data_url = "http://lib.stat.cmu.edu/datasets/boston"
    raw_df = pd.read_csv(data_url, sep=r"\s+", skiprows=22, header=None)
    data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
    target = raw_df.values[1::2, 2]
    x_train, x_test, y_train, y_test = train_test_split(data, target, train_size=0.7, random_state=42)
    transfer = StandardScaler()
    transfer.fit(x_train)
    x_train = transfer.transform(x_train)
    x_test = transfer.transform(x_test)
    estimator = SGDRegressor()
    estimator.fit(x_train, y_train)
    y_predict = estimator.predict(x_test)
    mse = mean_squared_error(y_test, y_predict)
    print("Gradient descent (SGD) - coefficients:", estimator.coef_)
    print("Gradient descent (SGD) - intercept:", estimator.intercept_)
    print(mse)
Ridge Regression
def demo3():
    data_url = "http://lib.stat.cmu.edu/datasets/boston"
    raw_df = pd.read_csv(data_url, sep=r"\s+", skiprows=22, header=None)
    data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
    target = raw_df.values[1::2, 2]
    x_train, x_test, y_train, y_test = train_test_split(data, target, train_size=0.7, random_state=42)
    transfer = StandardScaler()
    transfer.fit(x_train)
    x_train = transfer.transform(x_train)
    x_test = transfer.transform(x_test)
    estimator = Ridge()
    estimator.fit(x_train, y_train)
    y_predict = estimator.predict(x_test)
    mse = mean_squared_error(y_test, y_predict)
    print("Ridge - coefficients:", estimator.coef_)
    print("Ridge - intercept:", estimator.intercept_)
    print(mse)
Logistic Regression
def demo4():
    data = pd.read_csv("./datasets/breast-cancer-wisconsin.data",
                       names=['Sample code number', 'Clump Thickness', 'Uniformity of Cell Size',
                              'Uniformity of Cell Shape', 'Marginal Adhesion', 'Single Epithelial Cell Size',
                              'Bare Nuclei', 'Bland Chromatin', 'Normal Nucleoli', 'Mitoses', 'Class'])
    # "?" marks missing values in this dataset
    data.replace(to_replace="?", value=np.nan, inplace=True)
    data.dropna(inplace=True)
    x = data.iloc[:, 1:-1]
    y = data['Class']
    x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=0.7, random_state=42)
    transfer = StandardScaler()
    transfer.fit(x_train)
    x_train = transfer.transform(x_train)
    x_test = transfer.transform(x_test)
    estimator = LogisticRegression()
    estimator.fit(x_train, y_train)
    # joblib.dump(estimator, 'estimator.pkl')
    # estimator = joblib.load('estimator.pkl')
    y_predict = estimator.predict(x_test)
    print(estimator.coef_)
    print(estimator.intercept_)
    score = estimator.score(x_test, y_test)
    print(score)
    report = classification_report(y_test, y_predict, labels=[2, 4], target_names=["benign", "malignant"])
    print(report)
    # The class labels are 2 (benign) and 4 (malignant); roc_auc_score needs a binary 0/1 ground truth
    y_true = np.where(y_test > 3, 1, 0)
    auc = roc_auc_score(y_true, y_predict)
    print(auc)
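AUC is normally computed from predicted probabilities rather than hard class labels. A minimal sketch, assuming it runs inside demo4 with the fitted estimator, x_test, and y_test in scope (the positive class is label 4, i.e. malignant):

y_proba = estimator.predict_proba(x_test)[:, 1]  # probability of class 4, the second entry of estimator.classes_
auc_proba = roc_auc_score(np.where(y_test > 3, 1, 0), y_proba)
print(auc_proba)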
Clustering Algorithms
KMeans
data = pd.read_csv("./datasets/factor_returns.csv")
data = data.iloc[:, 1:10]
transfer = VarianceThreshold(threshold=10)
data_new = transfer.fit_transform(data)
# df = pd.DataFrame(data_new, columns=transfer.get_feature_names_out())
estimator = KMeans()
estimator.fit(data_new)
y_predict = estimator.predict(data_new)
print(y_predict)
s = silhouette_score(data_new, y_predict)
print(s)
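KMeans() above uses the default n_clusters=8, which is unlikely to be the right number for this data. A common way to pick the cluster count is to scan a small range and keep the value with the highest silhouette score; a minimal sketch, assuming data_new from the block above is in scope:

best_k, best_s = None, -1.0
for k in range(2, 9):
    labels = KMeans(n_clusters=k, random_state=42).fit_predict(data_new)
    s_k = silhouette_score(data_new, labels)
    print(k, s_k)
    if s_k > best_s:
        best_k, best_s = k, s_k
print("best k:", best_k)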
Model Tuning
# Grid search and cross-validation, using KNN as an example
iris = load_iris()
x_train, x_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.3, random_state=42)
transfer = StandardScaler()
transfer.fit(x_train)
x_train = transfer.transform(x_train)
x_test = transfer.transform(x_test)
estimator = KNeighborsClassifier()
# Grid search settings
para_dict = {"n_neighbors": range(1, 10)}
estimator = GridSearchCV(estimator, para_dict, cv=10)
estimator.fit(x_train, y_train)
# Best parameters found by the search
print("best_score_:", estimator.best_score_)
print("best_estimator_:", estimator.best_estimator_)
print("best_params_:", estimator.best_params_)
Author: CodingOrange
Original post: https://www.cnblogs.com/CodingOrange/p/17642747.html
Please credit the source when reposting!