After Spark has applied a series of transformations to a DataFrame, the result needs to be written to MySQL. The implementation is described below.

1. MySQL connection information

The MySQL connection information is kept in an external configuration file, which makes it easy to add or change settings later.

 // Example configuration file:
[hdfs@iptve2e03 tmp_lillcol]$ cat job.properties
# MySQL database configuration
mysql.driver=com.mysql.jdbc.Driver
mysql.url=jdbc:mysql://127.0.0.1:3306/database1?useSSL=false&autoReconnect=true&failOverReadOnly=false&rewriteBatchedStatements=true
mysql.username=user
mysql.password=123456

2. Required jar dependencies (sbt notation; for Maven, adjust accordingly — a sketch follows the list)

 libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.6.0-cdh5.7.2"
libraryDependencies += "org.apache.spark" % "spark-sql_2.10" % "1.6.0-cdh5.7.2"
libraryDependencies += "org.apache.spark" % "spark-hive_2.10" % "1.6.0-cdh5.7.2"
libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.2.0-cdh5.7.2"
libraryDependencies += "org.apache.hbase" % "hbase-server" % "1.2.0-cdh5.7.2"
libraryDependencies += "org.apache.hbase" % "hbase-common" % "1.2.0-cdh5.7.2"
libraryDependencies += "org.apache.hbase" % "hbase-protocol" % "1.2.0-cdh5.7.2"
libraryDependencies += "mysql" % "mysql-connector-java" % "5.1.38"
libraryDependencies += "org.apache.spark" % "spark-streaming_2.10" % "1.6.0-cdh5.7.2"
libraryDependencies += "com.yammer.metrics" % "metrics-core" % "2.2.0"

3. Complete implementation

 import java.io.FileInputStream
import java.sql.{Connection, DriverManager}
import java.util.Properties

import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.{DataFrame, SQLContext, SaveMode}
import org.apache.spark.{SparkConf, SparkContext}

/**
  * @author Administrator
  *         2018/10/16-10:15
  */
object SaveDataFrameASMysql {
  var hdfsPath: String = ""
  var proPath: String = ""
  var DATE: String = ""

  val sparkConf: SparkConf = new SparkConf().setAppName(getClass.getSimpleName)
  val sc: SparkContext = new SparkContext(sparkConf)
  val sqlContext: SQLContext = new HiveContext(sc)

  def main(args: Array[String]): Unit = {
    hdfsPath = args(0)
    proPath = args(1)
    // Read the whole table without a filter
    val dim_sys_city_dict: DataFrame = readMysqlTable(sqlContext, "TestMysqlTble1", proPath)
    dim_sys_city_dict.show(10)
    // Save to MySQL
    saveASMysqlTable(dim_sys_city_dict, "TestMysqlTble2", SaveMode.Append, proPath)
  }

  /**
    * Save a DataFrame as a MySQL table
    *
    * @param dataFrame the DataFrame to save
    * @param tableName target MySQL table name
    * @param saveMode  save mode: Append, Overwrite, ErrorIfExists or Ignore
    * @param proPath   path of the configuration file
    */
  def saveASMysqlTable(dataFrame: DataFrame, tableName: String, saveMode: SaveMode, proPath: String) = {
    var table = tableName
    val properties: Properties = getProPerties(proPath)
    // The keys in the configuration file differ from the keys Spark expects,
    // so build a new Properties object in the format Spark's JDBC writer understands
    val prop = new Properties
    prop.setProperty("user", properties.getProperty("mysql.username"))
    prop.setProperty("password", properties.getProperty("mysql.password"))
    prop.setProperty("driver", properties.getProperty("mysql.driver"))
    prop.setProperty("url", properties.getProperty("mysql.url"))
    if (saveMode == SaveMode.Overwrite) {
      var conn: Connection = null
      try {
        conn = DriverManager.getConnection(
          prop.getProperty("url"),
          prop.getProperty("user"),
          prop.getProperty("password")
        )
        val stmt = conn.createStatement
        table = table.toUpperCase
        // Truncate first, then Append, so the table definition is preserved
        stmt.execute(s"truncate table $table")
        conn.close()
      }
      catch {
        case e: Exception =>
          println("MySQL Error:")
          e.printStackTrace()
      }
    }
    dataFrame.write.mode(SaveMode.Append).jdbc(prop.getProperty("url"), table, prop)
  }

  /**
    * Read a MySQL table
    *
    * @param sqlContext
    * @param tableName name of the MySQL table to read
    * @param proPath   path of the configuration file
    * @return the MySQL table as a DataFrame
    */
  def readMysqlTable(sqlContext: SQLContext, tableName: String, proPath: String) = {
    val properties: Properties = getProPerties(proPath)
    sqlContext
      .read
      .format("jdbc")
      .option("url", properties.getProperty("mysql.url"))
      .option("driver", properties.getProperty("mysql.driver"))
      .option("user", properties.getProperty("mysql.username"))
      .option("password", properties.getProperty("mysql.password"))
      // .option("dbtable", tableName.toUpperCase)
      .option("dbtable", tableName)
      .load()
  }

  /**
    * Read a MySQL table with a filter condition
    *
    * @param sqlContext
    * @param table           name of the MySQL table to read
    * @param filterCondition filter condition (SQL WHERE clause)
    * @param proPath         path of the configuration file
    * @return the MySQL table as a DataFrame
    */
  def readMysqlTable(sqlContext: SQLContext, table: String, filterCondition: String, proPath: String) = {
    val properties: Properties = getProPerties(proPath)
    var tableName = ""
    tableName = "(select * from " + table + " where " + filterCondition + " ) as t1"
    sqlContext
      .read
      .format("jdbc")
      .option("url", properties.getProperty("mysql.url"))
      .option("driver", properties.getProperty("mysql.driver"))
      .option("user", properties.getProperty("mysql.username"))
      .option("password", properties.getProperty("mysql.password"))
      .option("dbtable", tableName)
      .load()
  }

  /**
    * Load the configuration file
    *
    * @param proPath
    * @return
    */
  def getProPerties(proPath: String) = {
    val properties: Properties = new Properties()
    properties.load(new FileInputStream(proPath))
    properties
  }
}

4. Test

 def main(args: Array[String]): Unit = {
  hdfsPath = args(0)
  proPath = args(1)
  // Read the whole table without a filter
  val dim_sys_city_dict: DataFrame = readMysqlTable(sqlContext, "TestMysqlTble1", proPath)
  dim_sys_city_dict.show(10)
  // Save to MySQL
  saveASMysqlTable(dim_sys_city_dict, "TestMysqlTble2", SaveMode.Append, proPath)
}
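
The filtered overload of readMysqlTable is not exercised in the test above. A minimal usage sketch, assuming a hypothetical filter condition on city_id:

 // Push the filter down to MySQL instead of loading the whole table
 val city_249: DataFrame = readMysqlTable(sqlContext, "TestMysqlTble1", "city_id = '249'", proPath)
 city_249.show(10)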

5. Run results (sensitive data has been masked)

 +-------+-------+---------+---------+--------+----------+---------+--------------------+----+-----------+
|dict_id|city_id|city_name|city_code|group_id|group_name|area_code| bureau_id|sort|bureau_name|
+-------+-------+---------+---------+--------+----------+---------+--------------------+----+-----------+
| 1| 249| **| **_ab| 100| **按时| **-查到|xcaasd...| 21| 张三公司|
| 2| 240| **| **_ab| 300| **按时| **-查到|xcaasd...| 21| 张三公司|
| 3| 240| **| **_ab| 100| **按时| **-查到|xcaasd...| 21| 张三公司|
| 4| 242| **| **_ab| 300| **按时| **-查到|xcaasd...| 01| 张三公司|
| 5| 246| **| **_ab| 100| **按时| **-查到|xcaasd...| 01| 张三公司|
| 6| 246| **| **_ab| 300| **按时| **-查到|xcaasd...| 01| 张三公司|
| 7| 248| **| **_ab| 200| **按时| **-查到|xcaasd...| 01| 张三公司|
| 8| 242| **| **_ab| 400| **按时| **-查到|xcaasd...| 01| 张三公司|
| 9| 247| **| **_ab| 200| **按时| **-查到|xcaasd...| 01| 张三公司|
| 0| 243| **| **_ab| 400| **按时| **-查到|xcaasd...| 01| 张三公司|
+-------+-------+---------+---------+--------+----------+---------+--------------------+----+-----------+

mysql> desc TestMysqlTble1;
+-------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------+-------------+------+-----+---------+-------+
| dict_id | varchar(32) | YES | | NULL | |
| city_id | varchar(32) | YES | | NULL | |
| city_name | varchar(32) | YES | | NULL | |
| city_code | varchar(32) | YES | | NULL | |
| group_id | varchar(32) | YES | | NULL | |
| group_name | varchar(32) | YES | | NULL | |
| area_code | varchar(32) | YES | | NULL | |
| bureau_id | varchar(64) | YES | | NULL | |
| sort | varchar(32) | YES | | NULL | |
| bureau_name | varchar(32) | YES | | NULL | |
+-------------+-------------+------+-----+---------+-------+
10 rows in set (0.00 sec)

mysql> desc TestMysqlTble2;
+-------------+------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------+------+------+-----+---------+-------+
| dict_id | text | YES | | NULL | |
| city_id | text | YES | | NULL | |
| city_name | text | YES | | NULL | |
| city_code | text | YES | | NULL | |
| group_id | text | YES | | NULL | |
| group_name | text | YES | | NULL | |
| area_code | text | YES | | NULL | |
| bureau_id | text | YES | | NULL | |
| sort | text | YES | | NULL | |
| bureau_name | text | YES | | NULL | |
+-------------+------+------+-----+---------+-------+
10 rows in set (0.00 sec)

mysql> select count(1) from TestMysqlTble1;
+----------+
| count(1) |
+----------+
| 21 |
+----------+
1 row in set (0.00 sec)

mysql> select count(1) from TestMysqlTble2;
+----------+
| count(1) |
+----------+
| 21 |
+----------+
1 row in set (0.00 sec)

6. Performance

Used directly like this, it was fine for small data sets, but once the data volume grew the write speed became unacceptable. I tried several tweaks with little effect, then went into the Spark source code and found two key fragments:

  /**
 * Saves the content of the [[DataFrame]] to a external database table via JDBC. In the case the
 * table already exists in the external database, behavior of this function depends on the
 * save mode, specified by the `mode` function (default to throwing an exception).
 *
 * Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash
 * your external database systems.
 *
 * @param url JDBC database url of the form `jdbc:subprotocol:subname`
 * @param table Name of the table in the external database.
 * @param connectionProperties JDBC database connection arguments, a list of arbitrary string
 *                             tag/value. Normally at least a "user" and "password" property
 *                             should be included.
 *
 * @since 1.4.0
 */
def jdbc(url: String, table: String, connectionProperties: Properties): Unit = {
  val props = new Properties()
  extraOptions.foreach { case (key, value) =>
    props.put(key, value)
  }
  // connectionProperties should override settings in extraOptions
  props.putAll(connectionProperties)
  val conn = JdbcUtils.createConnectionFactory(url, props)()

  try {
    var tableExists = JdbcUtils.tableExists(conn, url, table)

    if (mode == SaveMode.Ignore && tableExists) {
      return
    }

    if (mode == SaveMode.ErrorIfExists && tableExists) {
      sys.error(s"Table $table already exists.")
    }

    if (mode == SaveMode.Overwrite && tableExists) {
      JdbcUtils.dropTable(conn, table)
      tableExists = false
    }

    // Create the table if the table didn't exist.
    if (!tableExists) {
      val schema = JdbcUtils.schemaString(df, url)
      val sql = s"CREATE TABLE $table ($schema)"
      val statement = conn.createStatement
      try {
        statement.executeUpdate(sql)
      } finally {
        statement.close()
      }
    }
  } finally {
    conn.close()
  }

  JdbcUtils.saveTable(df, url, table, props) //----------------------------- key point 1
}

/**
 * Saves the RDD to the database in a single transaction.
 */
def saveTable(
    df: DataFrame,
    url: String,
    table: String,
    properties: Properties) {
  val dialect = JdbcDialects.get(url)
  val nullTypes: Array[Int] = df.schema.fields.map { field =>
    getJdbcType(field.dataType, dialect).jdbcNullType
  }

  val rddSchema = df.schema
  val getConnection: () => Connection = createConnectionFactory(url, properties)
  val batchSize = properties.getProperty("batchsize", "1000").toInt
  df.foreachPartition { iterator => //------------------------------------ key point 2
    savePartition(getConnection, table, iterator, rddSchema, nullTypes, batchSize, dialect)
  }
}

In other words, the built-in method writes the data partition by partition, opening one MySQL connection per partition. So the simplest optimization is to repartition the DataFrame before saving; watch out for data skew, otherwise the repartitioning may bring no improvement at all.
The fastest approach I have tested so far is to pull the data down as files and import them into MySQL with the LOAD DATA command, but that is more cumbersome.
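
For reference, a rough sketch of that LOAD DATA route, assuming the DataFrame has already been exported as a delimited file and that local_infile is enabled; the file path and delimiters are placeholders:

 -- Run in the mysql client after exporting the DataFrame to a CSV-like file
 LOAD DATA LOCAL INFILE '/tmp/test_mysql_tble2.csv'
 INTO TABLE TestMysqlTble2
 FIELDS TERMINATED BY ','
 LINES TERMINATED BY '\n';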

Below is a repartitioning example:

 def main(args: Array[String]): Unit = {
  hdfsPath = args(0)
  proPath = args(1)
  // Read the whole table without a filter
  val dim_sys_city_dict: DataFrame = readMysqlTable(sqlContext, "TestMysqlTble1", proPath)
  dim_sys_city_dict.show(10)
  // Repartition before saving to MySQL
  saveASMysqlTable(dim_sys_city_dict.repartition(10), "TestMysqlTble2", SaveMode.Append, proPath)
}
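
As the saveTable fragment above shows, the JDBC writer also reads a batchsize property (default 1000) from the connection properties. A minimal tuning sketch, assuming prop and table are the variables built inside saveASMysqlTable; the value 5000 is only an example:

 // Larger insert batches; works together with rewriteBatchedStatements=true in the JDBC URL
 prop.setProperty("batchsize", "5000")
 dataFrame.repartition(10).write.mode(SaveMode.Append).jdbc(prop.getProperty("url"), table, prop)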

7. Summary

A few things to keep in mind when writing a DataFrame to MySQL:

  • The target table is best created in advance; otherwise the columns get the default types, and the text type is really wasteful. Compare the two tables below: the source table TestMysqlTble1 and the table TestMysqlTble2 created by the DataFrame writer (a sketch of pre-creating the table follows the two listings).
 mysql> desc TestMysqlTble1;
+-------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------+-------------+------+-----+---------+-------+
| dict_id | varchar(32) | YES | | NULL | |
| city_id | varchar(32) | YES | | NULL | |
| city_name | varchar(32) | YES | | NULL | |
| city_code | varchar(32) | YES | | NULL | |
| group_id | varchar(32) | YES | | NULL | |
| group_name | varchar(32) | YES | | NULL | |
| area_code | varchar(32) | YES | | NULL | |
| bureau_id | varchar(64) | YES | | NULL | |
| sort | varchar(32) | YES | | NULL | |
| bureau_name | varchar(32) | YES | | NULL | |
+-------------+-------------+------+-----+---------+-------+
10 rows in set (0.00 sec)

mysql> desc TestMysqlTble2;
+-------------+------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------+------+------+-----+---------+-------+
| dict_id | text | YES | | NULL | |
| city_id | text | YES | | NULL | |
| city_name | text | YES | | NULL | |
| city_code | text | YES | | NULL | |
| group_id | text | YES | | NULL | |
| group_name | text | YES | | NULL | |
| area_code | text | YES | | NULL | |
| bureau_id | text | YES | | NULL | |
| sort | text | YES | | NULL | |
| bureau_name | text | YES | | NULL | |
+-------------+------+------+-----+---------+-------+
10 rows in set (0.00 sec)
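
A minimal sketch of pre-creating TestMysqlTble2 with explicit column types instead of letting Spark default everything to text; the sizes simply mirror TestMysqlTble1:

 CREATE TABLE TestMysqlTble2 (
   dict_id     varchar(32),
   city_id     varchar(32),
   city_name   varchar(32),
   city_code   varchar(32),
   group_id    varchar(32),
   group_name  varchar(32),
   area_code   varchar(32),
   bureau_id   varchar(64),
   sort        varchar(32),
   bureau_name varchar(32)
 );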
  • About SaveMode.Overwrite
 def jdbc(url: String, table: String, connectionProperties: Properties): Unit = {
  val props = new Properties()
  extraOptions.foreach { case (key, value) =>
    props.put(key, value)
  }
  // connectionProperties should override settings in extraOptions
  props.putAll(connectionProperties)
  val conn = JdbcUtils.createConnectionFactory(url, props)()

  try {
    var tableExists = JdbcUtils.tableExists(conn, url, table)

    if (mode == SaveMode.Ignore && tableExists) {
      return
    }

    if (mode == SaveMode.ErrorIfExists && tableExists) {
      sys.error(s"Table $table already exists.")
    }

    if (mode == SaveMode.Overwrite && tableExists) {
      JdbcUtils.dropTable(conn, table) //---------------------------------------- key point 1
      tableExists = false
    }

    // Create the table if the table didn't exist.
    if (!tableExists) {
      val schema = JdbcUtils.schemaString(df, url)
      val sql = s"CREATE TABLE $table ($schema)"
      val statement = conn.createStatement
      try {
        statement.executeUpdate(sql)
      } finally {
        statement.close()
      }
    }
  } finally {
    conn.close()
  }

  JdbcUtils.saveTable(df, url, table, props)
}

/**
 * Drops a table from the JDBC database.
 */
def dropTable(conn: Connection, table: String): Unit = {
  val statement = conn.createStatement
  try {
    statement.executeUpdate(s"DROP TABLE $table") //------------------------------------- key point 2
  } finally {
    statement.close()
  }
}

As the two key fragments above show, the writer first checks whether the table exists. Under SaveMode.Overwrite it calls dropTable(conn: Connection, table: String) and drops the original table, which means you lose your table definition; the re-created table then hits the previous problem of using the default types everywhere. That is why I added the following to the save method:

 if (saveMode == SaveMode.Overwrite) {
  var conn: Connection = null
  try {
    conn = DriverManager.getConnection(
      prop.getProperty("url"),
      prop.getProperty("user"),
      prop.getProperty("password")
    )
    val stmt = conn.createStatement
    table = table.toUpperCase
    // Truncate first, then Append, so the table definition is preserved
    stmt.execute(s"truncate table $table")
    conn.close()
  }
  catch {
    case e: Exception =>
      println("MySQL Error:")
      e.printStackTrace()
  }
}

truncate only deletes the data; it does not touch the table definition.

What if the table does not exist yet?

If the table does not exist at the outset, there are two cases:

1. Modes other than SaveMode.Overwrite

No problem: the table is created automatically, with the default column types.

2. SaveMode.Overwrite

It fails. Below is the error raised when SaveMode.Overwrite is used while TestMysqlTble2 does not exist:

 com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'iptv.TESTMYSQLTBLE2' doesn't exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
at com.mysql.jdbc.Util.getInstance(Util.java:387)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:939)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3878)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3814)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2478)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2625)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2547)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2505)
at com.mysql.jdbc.StatementImpl.executeInternal(StatementImpl.java:840)
at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:740)
at com.iptv.job.basedata.SaveDataFrameASMysql$.saveASMysqlTable(SaveDataFrameASMysql.scala:62)
at com.iptv.job.basedata.SaveDataFrameASMysql$.main(SaveDataFrameASMysql.scala:33)
at com.iptv.job.basedata.SaveDataFrameASMysql.main(SaveDataFrameASMysql.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Error details

 at com.iptv.job.basedata.SaveDataFrameASMysql$.saveASMysqlTable(SaveDataFrameASMysql.scala:62)

The code at the reported position is:

stmt.execute(s"truncate table $table") // truncate first, then Append, so the table definition is preserved

In other words, truncate requires the table to already exist.
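
One way to cope with the missing-table case, sketched here as an assumption rather than part of the original job, is to check for the table through JDBC metadata before truncating and let the subsequent Append create it when it is absent:

 if (saveMode == SaveMode.Overwrite) {
   var conn: Connection = null
   try {
     conn = DriverManager.getConnection(
       prop.getProperty("url"),
       prop.getProperty("user"),
       prop.getProperty("password")
     )
     table = table.toUpperCase
     // Only truncate when the table already exists; otherwise the later Append creates it
     val rs = conn.getMetaData.getTables(null, null, table, Array("TABLE"))
     if (rs.next()) {
       val stmt = conn.createStatement
       stmt.execute(s"truncate table $table")
       stmt.close()
     }
     conn.close()
   } catch {
     case e: Exception =>
       println("MySQL Error:")
       e.printStackTrace()
   }
 }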

With that, writing a DataFrame to MySQL is done.

This article is a summary of my own work; please credit the source when reposting.
