Seven Python Tools All Data Scientists Should Know How to Use

If you’re an aspiring data scientist, you’re inquisitive – always exploring, learning, and asking questions. Online tutorials and videos can help prepare you for your first role, but the best way to ensure that you’re ready to be a data scientist is to make sure you’re fluent in the tools people use in the industry.

I asked our data science faculty to put together seven Python tools that they think all data scientists should know how to use. The Galvanize Data Science and GalvanizeU programs both focus on making sure students spend ample time immersed in these technologies; investing the time to gain a deep understanding of these tools will give you a major advantage when you apply for your first job. Check them out below:

IPython

IPython is a command shell for interactive computing in multiple programming languages. Originally developed for Python, it offers enhanced introspection, rich media, additional shell syntax, tab completion, and rich history. IPython provides the following features:

  • Powerful interactive shells (terminal and Qt-based)
  • A browser-based notebook with support for code, text, mathematical expressions, inline plots and other rich media
  • Support for interactive data visualization and use of GUI toolkits
  • Flexible, embeddable interpreters to load into one’s own projects
  • Easy-to-use, high-performance tools for parallel computing
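
Here is a minimal sketch of an IPython session illustrating a few of these features. The %-prefixed “magic” commands are IPython-specific and will not run in a plain Python shell, and the NumPy array is just placeholder data:

    import numpy as np

    data = np.random.randn(1000)   # placeholder data for the examples below

    # Introspection: append ? to any object to see its docstring and signature
    data.mean?

    # Tab completion: type `data.` and press <Tab> to list available methods

    # %timeit runs a statement repeatedly and reports its execution time
    %timeit data.mean()

    # %history lists previously entered commands
    %history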

Contributed by Nir Kaldero, Director of Science, Head of Galvanize Experts

GraphLab Create

GraphLab Create is a Python library, backed by a C++ engine, for quickly building large-scale, high-performance data products.

Here are a few of the features of GraphLab Create:

  • Ability to analyze terabyte-scale data at interactive speeds, on your desktop
  • A single platform for tabular data, graphs, text, and images
  • State-of-the-art machine learning algorithms, including deep learning, boosted trees, and factorization machines
  • Run the same code on your laptop or in a distributed system, using a Hadoop YARN or EC2 cluster
  • A flexible API that lets you focus on high-level tasks or on specific machine learning models
  • Easily deploy data products in the cloud using Predictive Services
  • Visualize data for exploration and production monitoring
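
As a quick illustration, here is a minimal sketch of training a recommender with GraphLab Create. The toy data and column names are placeholders, and the library is proprietary, so the snippet assumes it is installed and licensed:

    import graphlab as gl

    # SFrame is GraphLab's scalable tabular data structure
    sf = gl.SFrame({'user_id': [1, 1, 2, 3],
                    'item_id': [10, 11, 10, 12],
                    'rating':  [5, 3, 4, 2]})

    # Train a recommender on the toy ratings above
    model = gl.recommender.create(sf, user_id='user_id',
                                  item_id='item_id', target='rating')

    # Recommend the top 5 items for user 1
    print(model.recommend(users=[1], k=5))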

Contributed by Benjamin Skrainka, Lead Data Science Instructor at Galvanize

Pandas

pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. Python has long been great for data munging and preparation, but less so for data analysis and modeling. pandas helps fill this gap, enabling you to carry out your entire data analysis workflow in Python without having to switch to a more domain-specific language like R.

Combined with the excellent IPython toolkit and other libraries, the environment for doing data analysis in Python excels in performance, productivity, and the ability to collaborate. pandas does not implement significant modeling functionality outside of linear and panel regression; for this, look to statsmodels and scikit-learn. More work is still needed to make Python a first-class statistical modeling environment, but we are well on our way toward that goal.
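
A minimal sketch of a typical pandas workflow – load, clean, and summarize. The file name and column names here are hypothetical placeholders:

    import pandas as pd

    # Load a CSV and parse the date column
    df = pd.read_csv('sales.csv', parse_dates=['date'])

    # Drop rows missing the column we want to analyze
    df = df.dropna(subset=['revenue'])

    # Aggregate revenue by month and summarize
    monthly = df.groupby(df['date'].dt.to_period('M'))['revenue'].sum()
    print(monthly.describe())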

Contributed by Nir Kaldero, Director of Science, Head of Galvanize Experts

PuLP

Linear programming is a type of optimization in which an objective function is maximized (or minimized) subject to a set of constraints. PuLP is a linear programming modeler written in Python. PuLP can generate LP files and call highly optimized solvers – GLPK, COIN CLP/CBC, CPLEX, and GUROBI – to solve these linear problems.
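
Here is a minimal sketch of a PuLP model for a toy problem; by default it calls the bundled CBC solver if one is installed:

    from pulp import LpMaximize, LpProblem, LpVariable, value

    # Maximize 3x + 2y subject to x + y <= 4 and x <= 2
    prob = LpProblem("toy_lp", LpMaximize)
    x = LpVariable("x", lowBound=0)
    y = LpVariable("y", lowBound=0)

    prob += 3 * x + 2 * y    # the first expression added is the objective
    prob += x + y <= 4       # constraints follow
    prob += x <= 2

    prob.solve()             # invokes the default solver
    print(value(x), value(y), value(prob.objective))   # 2.0 2.0 10.0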

Contributed by Isaac Laughlin, Data Science Instructor at Galvanize

Matplotlib

matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in Python scripts, the Python and IPython shells (à la MATLAB® or Mathematica®), web application servers, and six graphical user interface toolkits.

matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, and more with just a few lines of code.

For simple plotting, the pyplot module provides a MATLAB-like interface, particularly when combined with IPython. Power users have full control of line styles, font properties, axes properties, and more via an object-oriented interface or via a set of functions familiar to MATLAB users.
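
For instance, a complete figure takes only a few lines. A minimal sketch using the object-oriented interface:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 2 * np.pi, 200)

    fig, ax = plt.subplots()
    ax.plot(x, np.sin(x), label='sin(x)')
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.legend()

    # Write a hardcopy file; use plt.show() for an interactive window instead
    fig.savefig('sine.png', dpi=150)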

Contributed by Mike Tamir, Chief Science Officer at Galvanize

Scikit-Learn

scikit-learn is a simple and efficient tool for data mining and data analysis. What is so great about it is that it’s accessible to everybody and reusable in various contexts. It is built on NumPy, SciPy, and matplotlib, and it is open source and commercially usable under the BSD license. scikit-learn has the following features:

  • Classification – Identifying which category an object belongs to
  • Regression – Predicting a continuous-valued attribute associated with an object
  • Clustering – Automatic grouping of similar objects into sets
  • Dimensionality Reduction – Reducing the number of random variables to consider
  • Model Selection – Comparing, validating, and choosing parameters and models
  • Preprocessing – Feature extraction and normalization
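
A minimal classification sketch using the bundled iris dataset; the model choice and parameters here are illustrative:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Split the data, fit a classifier, and score it on held-out samples
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print(clf.score(X_test, y_test))   # accuracy on the test split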

Contributed by Isaac Laughlin, Data Science Instructor at Galvanize

Spark

Every Spark application consists of a driver program that runs the user’s main function and executes various parallel operations on a cluster. The main abstraction Spark provides is the resilient distributed dataset (RDD), a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel. RDDs are created by starting with a file in the Hadoop file system (or any other Hadoop-supported file system), or an existing collection in the driver program, and transforming it. Users may also ask Spark to persist an RDD in memory, allowing it to be reused efficiently across parallel operations. Finally, RDDs automatically recover from node failures.

A second abstraction in Spark is shared variables that can be used in parallel operations. By default, when Spark runs a function in parallel as a set of tasks on different nodes, it ships a copy of each variable used in the function to each task. Sometimes, a variable needs to be shared across tasks, or between tasks and the driver program. Spark supports two types of shared variables: broadcast variables, which can be used to cache a value in memory on all nodes, and accumulators, which are variables that are only “added” to, such as counters and sums.
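
A minimal PySpark sketch tying both ideas together: an RDD transformation pipeline plus a broadcast variable. The input path is a placeholder:

    from pyspark import SparkContext

    sc = SparkContext(appName="wordcount")

    # Build an RDD from a file and count words in parallel
    lines = sc.textFile("input.txt")          # placeholder path
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))

    # Broadcast a read-only lookup table so every node caches one copy
    stopwords = sc.broadcast({"the", "a", "an"})
    filtered = counts.filter(lambda kv: kv[0] not in stopwords.value)

    print(filtered.take(10))
    sc.stop()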

Contributed by Benjamin Skrainka, Lead Data Science Instructor at Galvanize

Still hungry for more data science? Enter our data science giveaway for a chance to win tickets to awesome conferences like PyData Seattle and the Data Science Summit, or get discounts on Python resources like Effective Python and Data Science from Scratch.
