7 Key Statistics Concepts Every Data Scientist Must Master
BY BALA PRIYA C | POSTED ON AUGUST 9, 2024
Statistics is one of the must-have skills for all data scientists. But learning statistics can be quite the task.
That's why we put together this guide to help you understand essential statistics concepts for data science. It gives you an overview of the statistics you need to know as a data scientist and points you toward specific topics to explore further.
Let's get started.
1. Descriptive Statistics
Descriptive statistics provide a summary of the main features of a dataset for preliminary data analysis. Key metrics include measures of central tendency, dispersion, and shape.
Measures of Central Tendency
These metrics describe the center or typical value of a dataset:
- Mean: Average value, sensitive to outliers
- Median: Middle value, robust to outliers
- Mode: Most frequent value, indicating common patterns
Measures of Dispersion
These metrics describe data spread or variability:
- Range: Difference between the highest and lowest values; sensitive to outliers
- Variance: Average squared deviation from the mean, indicating overall data spread
- Standard deviation: Square root of the variance, in the same units as the data; low values indicate data points close to the mean, high values indicate widespread data
Measures of Shape
These metrics describe the data distribution shape:
- Skewness: Asymmetry of the distribution; positive for right-skewed, negative for left-skewed
- Kurtosis: "Tailedness" of the distribution; high values indicate heavy tails (more outliers), low values indicate light tails
Understanding these metrics is foundational for further statistical analysis and modeling, helping to characterize the distribution, spread, and central tendencies of your data.
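As a minimal sketch, the metrics above can be computed with Python's standard library alone; the sample data below are made up for illustration (for the shape measures, `scipy.stats` provides `skew` and `kurtosis`):

```python
# Descriptive statistics with the standard library; data are illustrative.
import statistics as st

data = [12, 15, 15, 18, 20, 22, 22, 22, 25, 40]  # hypothetical sample

mean = st.mean(data)      # sensitive to the outlier 40
median = st.median(data)  # robust to outliers
mode = st.mode(data)      # most frequent value

data_range = max(data) - min(data)
variance = st.pvariance(data)  # average squared deviation from the mean
std_dev = st.pstdev(data)      # square root of variance, same units as data

print(f"mean={mean}, median={median}, mode={mode}")
print(f"range={data_range}, variance={variance:.2f}, std={std_dev:.2f}")
```

Note how the mean (21.1) sits above the median (21.0) here: the single outlier at 40 pulls it upward, which is exactly why the median is preferred for skewed data.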
2. Sampling Methods
You need to understand sampling to estimate population characteristics from a subset of the data. When sampling, you should ensure that the samples accurately reflect the population. Let's go over the common sampling methods.
Random Sampling
Random sampling minimizes selection bias, helping ensure the sample is representative. Assign unique numbers to population members and use a random number generator to select members at random.
Stratified Sampling
Stratified sampling ensures representation of all subgroups. It divides the population into homogeneous strata (such as age or gender groups) and randomly samples from each stratum in proportion to its size.
Cluster Sampling
Cluster sampling is cost-effective for large, spread-out populations. Divide the population into clusters (such as geographical areas), randomly select some clusters, and then sample all members, or a random subset, within the chosen clusters.
Systematic Sampling
Systematic sampling ensures evenly spread samples. Assign unique numbers to population members, determine the sampling interval (k), randomly select a starting point, and then select every k-th member.
Choosing the right sampling method improves the study design and yields more representative samples, which in turn improves the reliability of your conclusions.
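As a sketch, here is how three of these methods might look in plain Python; the population of 100 numbered members and the two strata are made-up illustrations:

```python
# Random, systematic, and stratified sampling with the standard library.
import random

random.seed(42)
population = list(range(1, 101))  # members numbered 1..100

# Simple random sampling: every member equally likely to be chosen
random_sample = random.sample(population, k=10)

# Systematic sampling: interval k = N / n, random start, every k-th member
k = len(population) // 10
start = random.randrange(k)
systematic_sample = population[start::k]

# Stratified sampling: sample from each stratum proportional to its size
strata = {"under_30": list(range(1, 41)), "30_plus": list(range(41, 101))}
total = sum(len(members) for members in strata.values())
stratified_sample = []
for members in strata.values():
    n_stratum = round(10 * len(members) / total)  # proportional allocation
    stratified_sample.extend(random.sample(members, n_stratum))

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```

Each method returns a sample of 10, but the stratified version guarantees that both hypothetical age groups are represented in proportion (4 and 6 members), which simple random sampling cannot promise.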
3. Probability Distributions
Probability distributions represent the likelihood of different outcomes. When you’re starting out, you should learn about the normal, binomial, Poisson, and exponential distributions, each with its own properties and applications.
Normal Distribution
Many real-world distributions follow the normal distribution, which has the following properties:
- Symmetric around the mean, with mean, median, and mode being equal
- Characterized by the mean (µ) and standard deviation (σ)
- Empirical rule: ~68% of the data falls within one standard deviation of the mean, ~95% within two, and ~99.7% within three
It’s also important to mention the Central Limit Theorem (CLT) when talking about normal distributions. In simple terms, the CLT states that with a large enough sample size, the sampling distribution of the sample mean approximates a normal distribution, regardless of the population's shape.
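The CLT is easy to check by simulation. This sketch draws repeated samples from a distinctly non-normal (uniform) population and shows that the sample means still cluster tightly around the population mean; the sample sizes are arbitrary choices:

```python
# Central Limit Theorem by simulation: means of uniform samples.
import random
import statistics as st

random.seed(1)
sample_means = [
    st.mean(random.uniform(0, 1) for _ in range(50))  # one sample of size 50
    for _ in range(2000)                              # repeated 2000 times
]

# The sampling distribution centers on the population mean (0.5), with
# standard error roughly sigma / sqrt(n) = (1 / 12 ** 0.5) / 50 ** 0.5 ≈ 0.041.
print(round(st.mean(sample_means), 3), round(st.stdev(sample_means), 3))
```

Plotting `sample_means` as a histogram would show the familiar bell shape, even though every individual observation came from a flat, uniform distribution.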
Binomial Distribution
The binomial distribution models the number of successes in n independent Bernoulli trials, where each trial has only two possible outcomes: success or failure. The binomial distribution is:
- Defined by the number of trials (n) and the probability of success (p)
- Suitable for binary outcomes like yes/no or success/failure
Poisson Distribution
The Poisson distribution is generally used to model the number of events occurring within a fixed interval of time. It’s especially suited for rare events and has the following properties:
- Events are independent and occur at a fixed average rate (λ)
- Useful for counting events over continuous domains (time, area, volume)
Exponential Distribution
The exponential distribution is continuous and is used to model the time between events in a Poisson process.
The exponential distribution is:
- Characterized by the rate parameter (λ), which is the inverse of the mean
- Memoryless: the probability of an event occurring in the future is independent of the past
Understanding these distributions helps in modeling various types of data.
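These distributions are closely related, and simulating them side by side makes the connections concrete. Below is a minimal sketch with arbitrary parameters, using only the standard library; the Poisson draw reuses the fact that inter-arrival times in a Poisson process are exponentially distributed:

```python
# Simulating the four distributions and checking their theoretical means.
import random
import statistics as st

random.seed(7)

# Normal(mu=0, sigma=1): symmetric around the mean
normal = [random.gauss(0, 1) for _ in range(10_000)]

# Binomial(n=10, p=0.3) as the sum of 10 Bernoulli(p) trials
binomial = [sum(random.random() < 0.3 for _ in range(10)) for _ in range(10_000)]

# Exponential(lambda=2): mean is 1 / lambda
exponential = [random.expovariate(2.0) for _ in range(10_000)]

# Poisson(lambda=2): count of exponential inter-arrival times that fit
# in one unit of time (events of a rate-2 Poisson process in [0, 1])
def poisson_draw(lam: float) -> int:
    t, count = 0.0, 0
    while True:
        t += random.expovariate(lam)
        if t > 1.0:
            return count
        count += 1

poisson = [poisson_draw(2.0) for _ in range(10_000)]

# Sample means land near the theoretical values: 0, n*p = 3, 1/2, lambda = 2
for name, xs in [("normal", normal), ("binomial", binomial),
                 ("exponential", exponential), ("poisson", poisson)]:
    print(f"{name}: mean ≈ {st.mean(xs):.2f}")
```

In practice you would reach for `numpy.random` or `scipy.stats` instead of hand-rolled draws, but writing the simulation out makes each distribution's defining mechanism explicit.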
4. Hypothesis Testing
Hypothesis testing is a method for making inferences about a population from sample data, determining whether there is enough evidence to support a claim.
The null hypothesis (H0) assumes no effect or difference. Example: a new drug has no effect on recovery time compared to an existing drug.
The alternative hypothesis (H1) assumes an effect exists. Example: the new drug reduces recovery time compared to the existing drug.
The p-value is the probability of obtaining results at least as extreme as those observed, assuming H0 is true.
- Low p-value (say ≤ 0.05): Strong evidence against H0; reject H0
- High p-value (say > 0.05): Weak evidence against H0; do not reject H0
You should also be aware of Type I and Type II errors:
- Type I error (α): Rejecting H0 when it is true, such as concluding the drug is effective when it is not
- Type II error (β): Not rejecting H0 when it is false, such as concluding the drug has no effect when it actually does
The general procedure for hypothesis testing can be summed up as follows:
1. State the null and alternative hypotheses.
2. Choose a significance level (α), commonly 0.05.
3. Collect the sample and compute the test statistic and p-value.
4. Reject H0 if the p-value is at or below α; otherwise, do not reject H0.
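Applying this to the drug example above, here is a hedged sketch with simulated (not real) recovery times. Since both samples are large, the two-sample t statistic is treated as approximately standard normal, so `statistics.NormalDist` suffices for the p-value:

```python
# Two-sample test on simulated recovery times (days); data are made up.
import random
import statistics as st
from statistics import NormalDist

random.seed(3)
# H0: the mean recovery times are equal. Here the new drug is truly faster.
existing_drug = [random.gauss(10.0, 2.0) for _ in range(100)]
new_drug = [random.gauss(8.0, 2.0) for _ in range(100)]

m1, m2 = st.mean(existing_drug), st.mean(new_drug)
se = (st.variance(existing_drug) / 100 + st.variance(new_drug) / 100) ** 0.5
z = (m1 - m2) / se                             # standardized mean difference
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value

alpha = 0.05
print(f"z = {z:.2f}, p = {p_value:.4f}, reject H0: {p_value <= alpha}")
```

For real analyses, `scipy.stats.ttest_ind` handles small samples and unequal variances properly; the point here is only to show the test-statistic-to-p-value pipeline end to end.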
5. Confidence Intervals
A confidence interval (CI) is a range of values, derived from sample data, that is likely to contain the true population parameter.
The confidence level (e.g., 95%) represents the frequency with which the calculated interval would contain the true population parameter if the experiment were repeated many times. Informally, a 95% CI means we are 95% confident that the true population mean lies within the interval.
Suppose the 95% confidence interval for the average price of houses in a city is 64.412K to 65.588K. This means we are 95% confident that the true average price of all houses in the city lies within this range.
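As a sketch, here is how such an interval might be computed from a sample; the prices (in thousands) are simulated with parameters chosen so the interval lands near the example above, and 1.96 is the 95% critical value of the normal distribution, appropriate because the sample is large:

```python
# 95% confidence interval for a mean from simulated house prices (K).
import random
import statistics as st

random.seed(10)
prices = [random.gauss(65.0, 6.0) for _ in range(400)]  # hypothetical prices

mean = st.mean(prices)
se = st.stdev(prices) / len(prices) ** 0.5  # standard error of the mean
margin = 1.96 * se                          # 95% normal critical value
lower, upper = mean - margin, mean + margin

print(f"95% CI for mean price: {lower:.3f}K to {upper:.3f}K")
```

With σ ≈ 6 and n = 400, the standard error is about 0.3, giving a margin of roughly 1.96 × 0.3 ≈ 0.588K, which is exactly the half-width of the quoted 64.412K to 65.588K interval.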
6. Regression Analysis
You should also learn regression analysis to model relationships between a dependent variable and one or more independent variables.
Linear regression models the linear relationship between a dependent variable and a single independent variable.
Multiple regression extends this to model the relationship between one dependent variable and two or more independent variables.
Check out Step-by-Step Guide to Linear Regression in Python to learn more about building regression models in Python.
Understanding regression is, therefore, fundamental for predictive modeling and forecasting problems.
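As a sketch, simple linear regression can be fit with the ordinary least squares formulas alone (slope = cov(x, y) / var(x)); the data here are a made-up linear relationship, y ≈ 2x + 1, with added noise:

```python
# Simple linear regression via ordinary least squares; data are synthetic.
import random
import statistics as st

random.seed(5)
x = [i / 10 for i in range(100)]
y = [2 * xi + 1 + random.gauss(0, 0.5) for xi in x]  # true line: y = 2x + 1

mean_x, mean_y = st.mean(x), st.mean(y)
cov_xy = sum((xi - mean_x) * (yi - mean_y)
             for xi, yi in zip(x, y)) / (len(x) - 1)
slope = cov_xy / st.variance(x)            # OLS slope estimate
intercept = mean_y - slope * mean_x        # line passes through the means

def predict(xi: float) -> float:
    return slope * xi + intercept

print(f"fitted: y ≈ {slope:.2f}x + {intercept:.2f}, "
      f"prediction at x=5: {predict(5):.2f}")
```

The fitted slope and intercept recover the true values (2 and 1) to within the noise; in practice you would use `sklearn.linear_model.LinearRegression` or `statsmodels`, which also extend naturally to multiple regression.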
7. Bayesian Statistics
Bayesian statistics provides a probabilistic approach to inference, updating beliefs about parameters or hypotheses based on prior knowledge and observed data. Key concepts include Bayes’ theorem, prior distribution, and posterior distribution.
Bayes' theorem updates the probability of a hypothesis H given new evidence E:

P(H | E) = P(E | H) × P(H) / P(E)

- P(H | E): Posterior probability of H given E
- P(E | H): Likelihood of E given H
- P(H): Prior probability of H
- P(E): Probability of E
The prior distribution represents initial information about a parameter before observing data.
The posterior distribution is the updated probability distribution after considering observed data.
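A short worked example of this updating process, with made-up numbers: a diagnostic test for a condition with 1% prevalence (the prior), 95% sensitivity, and a 5% false-positive rate:

```python
# Bayes' theorem on a hypothetical diagnostic test.
prior = 0.01        # P(H): prevalence of the condition
likelihood = 0.95   # P(E | H): test positive given the condition
false_pos = 0.05    # P(E | not H): test positive without the condition

# P(E) via the law of total probability
evidence = likelihood * prior + false_pos * (1 - prior)

# Posterior: P(H | E) = P(E | H) * P(H) / P(E)
posterior = likelihood * prior / evidence

print(f"P(H | E) = {posterior:.3f}")  # ≈ 0.161
```

Despite a positive result, the posterior is only about 16%, because the condition is rare: the prior dominates. This is exactly the kind of intuition Bayesian updating makes explicit.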
Wrapping Up
I hope you found this guide helpful. This is not an exhaustive list of stats concepts for data science, but it should serve as a good starting point.
If you’re interested in a step-by-step guide to learn statistics, check out 7 Steps to Mastering Statistics for Data Science.