[Machine Learning with Python] How to get your data?
Using the Pandas Library
The simplest way is to read data from a .csv file and store it as a DataFrame object:
import pandas as pd
df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)
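A quick sanity check right after loading is usually worth it. A minimal sketch (the exact columns depend on your olympics.csv):
print(df.head())     # preview the first five rows
print(df.shape)      # (number of rows, number of columns)
print(df.columns)    # column labels parsed from the header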
You can also read .xls files and directly select the rows and columns you are interested in by setting the skiprows and usecols parameters. You can also indicate the index column with the index_col parameter.
energy = pd.read_excel('Energy Indicators.xls', sheet_name='Energy', skiprows=8, usecols='E,G', index_col=None, na_values=['NA'])
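The selected columns usually need some cleanup afterwards. A minimal sketch, assuming the two columns you kept hold a country name and a numeric indicator (the labels below are made up for illustration):
energy.columns = ['Country', 'Energy Supply']  # assumed labels, not taken from the file
energy['Energy Supply'] = pd.to_numeric(energy['Energy Supply'], errors='coerce')  # force numeric, NaN on failure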
For .txt files, you can also use the read_csv function by specifying the separator:
university_towns = pd.read_csv('university_towns.txt', sep='\n', header=None)
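Depending on your pandas version, '\n' may not be accepted as a separator; a plain-Python fallback that builds the same one-column frame looks like this (a minimal sketch):
with open('university_towns.txt') as f:
    lines = [line.rstrip('\n') for line in f]
university_towns = pd.DataFrame(lines)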
See more about pandas I/O operations at http://pandas.pydata.org/pandas-docs/stable/io.html
Using the os Module
Read .csv files:
import os
import csv

for file in os.listdir("objective_folder"):
    with open('objective_folder/' + file, newline='') as csvfile:
        rows = csv.reader(csvfile)  # read the .csv file
        for row in rows:            # print each line in the file
            print(row)
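If you want the contents as DataFrames rather than raw rows, a pandas variant of the same loop could look like this (a minimal sketch, assuming every file in the folder is a .csv with the same columns):
import glob
import pandas as pd

frames = [pd.read_csv(path) for path in glob.glob('objective_folder/*.csv')]
combined = pd.concat(frames, ignore_index=True)  # one DataFrame holding all files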
Read .xls files:
import os
import xlrd

for file in os.listdir("objective_folder/"):
    data = xlrd.open_workbook('objective_folder/' + file)
    table = data.sheet_by_index(0)  # the first sheet in the workbook
    nrows = table.nrows             # number of rows
    for i in range(nrows):
        if i == 0:  # skip the first row if it defines variable names
            continue
        row_values = table.row_values(i)  # read the values of each row
        print(row_values)
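The same loop can also be written with pandas, which handles .xlsx files that newer xlrd releases no longer open (a minimal sketch, assuming each workbook's first sheet starts with a header row):
import os
import pandas as pd

for file in os.listdir('objective_folder/'):
    sheet = pd.read_excel(os.path.join('objective_folder', file), sheet_name=0)
    for row_values in sheet.itertuples(index=False):  # the header row becomes column names, so only data rows are iterated
        print(row_values)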
Download from a Website Automatically
We can also read data directly from a URL. This time, the .csv file is compressed as housing.tgz, so we need to download the archive and then decompress it. You can write a small function like the one below to do this. It is a worthwhile effort because you get the most recent data every time you run the function.
import os
import tarfile
from six.moves import urllib

DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/"
HOUSING_PATH = "datasets/housing"
HOUSING_URL = DOWNLOAD_ROOT + HOUSING_PATH + "/housing.tgz"

def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
    if not os.path.isdir(housing_path):
        os.makedirs(housing_path)
    tgz_path = os.path.join(housing_path, "housing.tgz")
    urllib.request.urlretrieve(housing_url, tgz_path)
    housing_tgz = tarfile.open(tgz_path)
    housing_tgz.extractall(path=housing_path)
    housing_tgz.close()
When you call fetch_housing_data(), it creates a datasets/housing directory in your workspace, downloads the housing.tgz file, and extracts housing.csv from it into this directory.
Now let’s load the data using pandas. Once again, you should write a small function to load the data:
import pandas as pd

def load_housing_data(housing_path=HOUSING_PATH):
    csv_path = os.path.join(housing_path, "housing.csv")
    return pd.read_csv(csv_path)
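Putting the two helpers together, a typical usage looks like this (a minimal sketch):
fetch_housing_data()           # download and extract datasets/housing/housing.csv
housing = load_housing_data()  # read the extracted .csv into a DataFrame
print(housing.head())          # preview the first five rows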
What’s more?
These are the methods I have encountered so far. In typical environments your data would be available in a relational database (or some other common datastore) and spread across multiple tables/documents/files. To access it, you would first need to get your credentials and access authorizations, and familiarize yourself with the data schema. I will add more methods as I encounter them.
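For the relational-database case, pandas can read query results directly into a DataFrame. A minimal sketch using Python's built-in sqlite3 module (the database file and table name here are made up for illustration):
import sqlite3
import pandas as pd

conn = sqlite3.connect('example.db')                       # hypothetical database file
df = pd.read_sql_query('SELECT * FROM some_table', conn)   # hypothetical table name
conn.close()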