PP: Deep r-th Root of Rank Supervised Joint Binary Embedding for Multivariate Time Series Retrieval
from: Dacheng Tao, University of Sydney
PROBLEM:
time series retrieval: given the current multivariate time series segment, how to obtain its relevant time series segments in the historical data.
Two challenges:
1. it requires a compact representation of the raw time series, which can explicitly encode the temporal dynamics as well as the correlations (interactions) between different pairs of time series.
2. how to retrieve similar results quickly and accurately for a query.
Compact representation: temporal dynamics + correlations
INTRODUCTION:
Problem definition: given the current multivariate time series segment, i.e., a slice of multivariate time series which lasts for a short period of time, we aim to find its most similar time series segments in the historical data (or database).
A supervised multivariate time series retrieval problem. label information is available in historical data.
Other methods: discrete Fourier transform, discrete wavelet transform, piecewise aggregate approximation. However, these methods only target univariate time series representation and ignore the correlations between different pairs of series.
?? Do the correlations between different series also need a compact encoding? Since we work with a multivariate time series inside one window, the correlations among the series have to be measured.
Treating each time series as an individual object, there are two possible orders for studying the correlations among them:
1. time series ----> compact representation -----> correlations
2. time series ----> correlation -----> compact representation
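A quick way to realize option 2 is to compute the pairwise correlation matrix of the series inside one window. Below is a minimal NumPy sketch; the segment shape, variable names, and the use of Pearson correlation are my own assumptions, not the paper's exact construction:

```python
import numpy as np

# One multivariate time series segment: T time steps, n individual series.
T, n = 100, 8
segment = np.random.randn(T, n)      # stand-in for a windowed raw segment

# Pairwise Pearson correlations between the n series inside this window.
# np.corrcoef expects variables in rows, so pass the transpose (n, T).
corr = np.corrcoef(segment.T)        # shape (n, n), symmetric, diagonal = 1
print(corr.shape)
```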
To speed up the expensive similarity search, the segments are mapped to compact binary codes and compared in Hamming space.
purpose: multivariate time series retrieval.
input: a raw multivariate time series segment
steps:
- employ LSTM units to encode the temporal dynamics
- use a CNN to encode the correlations between different pairs of time series
- the first two steps yield two separate feature vectors
- two separate feature vectors ----> a joint binary embedding
- calculate the similarity between two multivariate ts segments in Hamming space.
- r-th root ranking loss to train the disciplined embedding functions.
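The steps above can be sketched as one model. This is only a minimal PyTorch reconstruction from these notes; the layer sizes, the tanh relaxation of the sign function, and all names are my own assumptions rather than the paper's implementation:

```python
import torch
import torch.nn as nn

class JointBinaryEmbedding(nn.Module):
    """LSTM branch for temporal dynamics + CNN branch for the correlation
    matrix; the two feature vectors are concatenated and mapped to a code.
    All sizes are illustrative."""
    def __init__(self, n_series=8, hidden=64, code_bits=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_series, hidden_size=hidden,
                            batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, hidden),
        )
        self.embed = nn.Linear(2 * hidden, code_bits)

    def forward(self, segment, corr):
        # segment: (batch, T, n_series); corr: (batch, n_series, n_series)
        _, (h_t, _) = self.lstm(segment)         # last hidden state h_t
        h_t = h_t[-1]                            # (batch, hidden)
        l = self.cnn(corr.unsqueeze(1))          # (batch, hidden)
        y = torch.cat([h_t, l], dim=1)           # joint feature y = [h_t, l]
        return torch.tanh(self.embed(y))         # tanh relaxation of sign()

model = JointBinaryEmbedding()
codes = model(torch.randn(2, 100, 8), torch.randn(2, 8, 8))
print(codes.shape)   # (2, 32); threshold at 0 to get binary codes at retrieval time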
DEEP r-TH ROOT OF RANK SUPERVISED JOINT BINARY EMBEDDING
1. multivariate time series ----> LSTM ----> the last hidden state h_t
2. multivariate time series ----> correlation matrix ----> CNN ----> fully connected layer, l
3. joint binary embedding: y = [h_t, l]; hash function / embedding ----> H_v
4. instead of pairwise similarities, segment similarities are used in the form of triplets {(X_q, X_i, X_j)}.
y_q: embedding of a query segment; y_i: embedding of a similar segment; y_j: embedding of a dissimilar segment.
As far as I can tell so far, the model is simply trained with the r-th root ranking loss on triplet inputs {(X_q, X_i, X_j)}; how the retrieval itself is finally carried out is still unclear to me.
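Although the notes above do not say how retrieval is actually performed, the usual recipe with learned binary codes is to encode every historical segment offline and rank candidates by Hamming distance to the query code. The sketch below shows that standard approach; it is my assumption, not something confirmed by the paper:

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database segments by Hamming distance to the query.
    Codes are 0/1 arrays; names and shapes are illustrative."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists), dists

# db_codes: binary codes precomputed offline for all historical segments.
db_codes = np.random.randint(0, 2, size=(1000, 32))
query_code = np.random.randint(0, 2, size=32)
order, dists = hamming_rank(query_code, db_codes)
print(order[:5], dists[order[:5]])   # top-5 most similar historical segments
```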
EXPERIMENTS
To measure the effectiveness of various binary embedding techniques for multivariate time series retrieval, we consider three evaluation metrics, i.e., Mean Average Precision (MAP), precision at top-k positions (Precision@k), and recall at top-k positions (Recall@k).
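For reference, here is a minimal sketch of how Precision@k, Recall@k, and (mean) average precision can be computed for a single query with binary relevance labels; the function names and the toy ranked list are mine:

```python
import numpy as np

def precision_recall_at_k(ranked_relevance, k):
    """ranked_relevance: 0/1 array, relevance of results in ranked order."""
    topk = ranked_relevance[:k]
    precision = topk.sum() / k
    recall = topk.sum() / max(ranked_relevance.sum(), 1)
    return precision, recall

def average_precision(ranked_relevance):
    """AP for one query (averaging precision at each relevant position);
    MAP is the mean of AP over all queries."""
    hits, ap = 0, 0.0
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            ap += hits / i
    return ap / max(hits, 1)

rel = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # toy relevance of one ranked list
print(precision_recall_at_k(rel, 5), average_precision(rel))
```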
Judged by these metrics, the reported results look quite good.
SUPPLEMENTARY KNOWLEDGE:
1. Hamming distance: the number of positions at which the corresponding characters of two equal-length strings differ.
For example:
- the Hamming distance between 1011101 and 1001001 is 2.
- the Hamming distance between 2173896 and 2233796 is 3.
- the Hamming distance between "toned" and "roses" is 3.
2. triplet loss
Triplet loss is a loss function for artificial neural networks where a baseline (anchor) input is compared to a positive (truthy) input and a negative (falsy) input. The distance from the baseline (anchor) input to the positive (truthy) input is minimized, and the distance from the baseline (anchor) input to the negative (falsy) input is maximized.[1][2]
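A generic triplet margin loss makes the idea concrete. Note this is the standard formulation (available as a PyTorch built-in), not the paper's r-th root ranking loss, whose exact form these notes do not record:

```python
import torch
import torch.nn.functional as F

anchor   = torch.randn(4, 32)   # embedding of the query segment y_q
positive = torch.randn(4, 32)   # embedding of a similar segment y_i
negative = torch.randn(4, 32)   # embedding of a dissimilar segment y_j

# Standard triplet margin loss: pull (anchor, positive) together and
# push (anchor, negative) apart by at least the margin.
loss = F.triplet_margin_loss(anchor, positive, negative, margin=1.0)
print(loss.item())
```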