Improvements that could be made in the future:
1. the algorithm for constructing the network from the distance matrix
2. evolution of the sliding time window
3. later processing or visual analysis of the generated graphs

Thoughts:

1. What is the ground truth in load profiles?

For clustering there is no ground truth, so how should the parameters and options in steps 2, 3 and 4 be tuned? In this paper the time series are labeled, so the authors use the Rand index (RI) to guide their parameter selection, e.g. k and ε.
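A minimal sketch of the Rand index on two label vectors, counting the fraction of item pairs on which the predicted partition agrees with the ground truth (scikit-learn's rand_score / adjusted_rand_score could be used instead):

```python
from itertools import combinations

def rand_index(labels_true, labels_pred):
    """Fraction of item pairs on which the two partitions agree:
    grouped together in both, or separated in both."""
    pairs = list(combinations(range(len(labels_true)), 2))
    agree = sum(
        (labels_true[i] == labels_true[j]) == (labels_pred[i] == labels_pred[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# a perfect clustering (up to label renaming) gives RI = 1.0
print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))   # 1.0
```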

Assumption: similar time series tend to connect to each other and form communities.

Background and related works

Shape-based, feature-based, and structure-based distance measures; time series clustering; community detection in networks.

Methodology

  1. data normalization
  2. time series distance calculation
  3. network construction
  4. community detection

Which steps influence the clustering results:

the distance calculation algorithm; the network construction method; the community detection method.

2. distance matrix

Calculate the distance for each pair of time series in the data set and construct a distance matrix D, where d_ij is the distance between series X_i and X_j. A good choice of distance measure has a strong influence on the network construction and the clustering result.
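A minimal sketch of this step, assuming z-normalized series of equal length and plain Euclidean distance (the paper compares several other measures):

```python
import numpy as np

def distance_matrix(series, metric=None):
    """Symmetric matrix D with D[i, j] = distance between series i and j."""
    if metric is None:
        metric = lambda a, b: np.linalg.norm(a - b)   # Euclidean by default
    n = len(series)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = metric(series[i], series[j])
    return D

# toy usage: 5 z-normalized series of length 50
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 50))
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
D = distance_matrix(X)
```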

3. network construction

Two common methods: k-NN and ε-NN. (EXPLORATION)
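A minimal sketch of both construction rules, assuming the distance matrix D from step 2; each returns a 0/1 adjacency matrix:

```python
import numpy as np

def knn_graph(D, k):
    """k-NN graph: link each vertex to its k nearest neighbours in D."""
    n = D.shape[0]
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        neighbours = np.argsort(D[i])[1:k + 1]   # skip the vertex itself (distance 0)
        A[i, neighbours] = 1
    return np.maximum(A, A.T)                    # symmetrize: undirected graph

def eps_graph(D, eps):
    """ε-NN graph: link every pair of vertices whose distance is at most eps."""
    A = (D <= eps).astype(int)
    np.fill_diagonal(A, 0)
    return A
```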

Experiments

45 time series data sets.

Purpose: check the performance of each combination of steps 2, 3 and 4 on each data set.

Evaluation metric: Rand index (RI).

Vary the parameters: k of k-NN from 1 to n-1; ε of ε-NN from min(D) to max(D) in 100 steps.
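A sketch of this sweep, gluing together the hypothetical helpers from the earlier sketches (rand_index, knn_graph, eps_graph); cluster(A) stands for whatever community-detection step is used, and true_labels are the data set's class labels used only for evaluation:

```python
import numpy as np

n = D.shape[0]
results = {}

for k in range(1, n):                                  # k-NN: discrete k = 1 .. n-1
    results[("k-NN", k)] = rand_index(true_labels, cluster(knn_graph(D, k)))

off_diag = D[np.triu_indices(n, 1)]                    # pairwise distances, no diagonal
for eps in np.linspace(off_diag.min(), off_diag.max(), 100):   # ε-NN: 100 steps
    results[("eps-NN", eps)] = rand_index(true_labels, cluster(eps_graph(D, eps)))

best_setting = max(results, key=results.get)           # parameter setting with highest RI
```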

Step 2: Manhattan, Euclidean, infinite norm, DTW, short time series distance, DISSIM, complexity-invariant distance, wavelet transform, Pearson correlation, integrated periodogram.
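Since DTW later turns out to be the strongest measure, here is a minimal dynamic-programming sketch of classic (unconstrained) DTW with absolute-difference local cost:

```python
import numpy as np

def dtw(a, b):
    """Classic DTW distance between 1-D series a and b, O(len(a) * len(b))."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local cost
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```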

Step 4 (community detection): fast greedy; multilevel; walktrap; infomap; label propagation.
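A sketch of running these five algorithms, assuming the python-igraph package and the adjacency matrix A from the network-construction sketch above:

```python
import igraph as ig

n = A.shape[0]
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j]]
g = ig.Graph(n=n, edges=edges)

memberships = {
    "fastgreedy":        g.community_fastgreedy().as_clustering().membership,
    "multilevel":        g.community_multilevel().membership,
    "walktrap":          g.community_walktrap().as_clustering().membership,
    "infomap":           g.community_infomap().membership,
    "label_propagation": g.community_label_propagation().membership,
}
```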

Step 3 (network construction): vary the parameters k and ε.

Results

1. The effect of k and ε on the clustering results (RI).

The k-NN construction method only allows discrete values of k, while the ε-NN method accepts continuous values. When k and ε are small, vertices tend to make only a few connections.

?? What is the meaning of A, B, C, D in Figure 5?

2. Statistical test of the effect of different distance measures: Friedman test and Nemenyi test.

Comparing multiple algorithms on multiple data sets:

  • If the samples satisfy the assumptions of repeated-measures ANOVA (e.g. normality, equal variances), prefer ANOVA.
  • If the samples do not satisfy the ANOVA assumptions, use the Friedman test with the Nemenyi test as the post-hoc analysis (a minimal sketch follows this list).
  • If the sample sizes differ, or Friedman-Nemenyi cannot be used for some specific reason, try Kruskal-Wallis with Dunn's test. Note that this approach handles independent measurements, so it has to be considered case by case.
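A minimal sketch of the Friedman + Nemenyi procedure on a hypothetical score table (rows = data sets, columns = distance measures), assuming the scikit-posthocs package for the Nemenyi post-hoc:

```python
import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp               # assumption: scikit-posthocs is installed

rng = np.random.default_rng(1)
ri = rng.uniform(0.5, 1.0, size=(45, 10))  # hypothetical RI scores: 45 data sets x 10 measures

# Friedman test: do the measures differ across data sets?
stat, p = friedmanchisquare(*[ri[:, j] for j in range(ri.shape[1])])
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

# Nemenyi post-hoc: which pairs of measures differ significantly?
if p < 0.05:
    pairwise_p = sp.posthoc_nemenyi_friedman(ri)
    print(pairwise_p)
```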

The DTW measure presents the best results for both network construction methods.

3. Statistical test of the effect of the community detection algorithms: Friedman test and Nemenyi test.

4. Comparison with rival methods.

i. some classic clustering algorithms: k-medoids, complete-linkage, single-linkage, average-linkage, median-linkage, centroid-linkage and DIANA (a scipy linkage sketch follows item ii);

ii. three up-to-date ones: Zhang’s method [41], Maharaj’s method [24] and PDC [5]
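For the classic linkage baselines in item i, a sketch with scipy on the precomputed distance matrix D (k-medoids and DIANA are not in scipy and would need another package):

```python
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

n_clusters = 3                                   # assumption for the toy example
condensed = squareform(D, checks=False)          # condensed form expected by linkage

baselines = {}
for method in ("single", "complete", "average"):
    Z = linkage(condensed, method=method)
    baselines[method] = fcluster(Z, t=n_clusters, criterion="maxclust")
```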

5. Detect time series clusters with time shifts.

Assumption: clustering algorithms should be capable of detecting groups of time series that have similar variations in time.

CBF data set: 30 series in three groups; all are clustered correctly.

6. Detect shape patterns.

1000 time series of length 128, four groups.

detect shape patterns (UD, DD, DU, UU);

Discussion

1. The same idea can be extended to multivariate time series clustering.

2. Evaluate the simulation results using different indexes.

3. As future works, we plan to propose automatic strategies for choosing the best number of neighbors (k and ε) and speeding up the network construction method, instead of using the naive method.

4. We also plan to apply the idea to solve other kinds of problems in time series analysis, such as time series prediction.   ??

Supplementary knowledge: 

1. Box plot

It shows the maximum, minimum, median, and upper and lower quartiles of a set of data.

Below is a concrete box-plot example:

[box plot drawn over a score axis from 0 to 10]

This data set shows the following (a numpy sketch for computing such statistics follows the list):

  • Minimum = 5
  • Lower quartile (Q1) = 7
  • Median (Q2) = 8.5
  • Upper quartile (Q3) = 9
  • Maximum = 10
  • Mean = 8
  • Interquartile range (IQR) = Q3 - Q1 = 2 (i.e. ΔQ)
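A small numpy sketch for computing these summary statistics on a hypothetical score sample (matplotlib's plt.boxplot(scores) would draw the corresponding figure):

```python
import numpy as np

scores = np.array([5, 7, 7, 8, 8.5, 9, 9, 9, 10, 10])   # hypothetical sample

summary = {
    "min":    scores.min(),
    "Q1":     np.percentile(scores, 25),
    "median": np.median(scores),
    "Q3":     np.percentile(scores, 75),
    "max":    scores.max(),
    "mean":   scores.mean(),
}
summary["IQR"] = summary["Q3"] - summary["Q1"]
print(summary)
```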

2. A change of mindset: the experiments section also matters and is not optional; read it carefully.

3. Statistical tests

How should commonly used machine learning algorithms be compared?

"All models are wrong, but some are useful." -- statistician George Box.

4. Univariate and multivariate time series.

Univariate time series: Only one variable is varying over time. For example, data collected from a sensor measuring the temperature of a room every second. Therefore, each second, you will only have a one-dimensional value, which is the temperature.

Multivariate time series: Multiple variables are varying over time. For example, a tri-axial accelerometer. There are three accelerations, one for each axis (x, y, z), and they vary simultaneously over time.

A data set whose columns value_1, value_2 and value_3 change simultaneously over time is therefore a multivariate time series.
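A tiny sketch of the difference in array shape, assuming T observations and three channels for the multivariate case:

```python
import numpy as np

T = 100
univariate   = np.random.randn(T)        # shape (T,): one value per time step
multivariate = np.random.randn(T, 3)     # shape (T, 3): e.g. x, y, z acceleration

print(univariate.shape, multivariate.shape)   # (100,) (100, 3)
```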
