Music information retrieval - Wikipedia https://en.wikipedia.org/wiki/Music_information_retrieval

Music information retrieval (MIR) is the interdisciplinary science of retrieving information from music. MIR is a small but growing field of research with many real-world applications. Those involved in MIR may have a background in musicology, psychoacoustics, psychology, academic music study, signal processing, informatics, machine learning, computational intelligence, or some combination of these.

Applications

MIR is being used by businesses and academics to categorize, manipulate and even create music.

Recommender systems

Several recommender systems for music already exist, but surprisingly few are based upon MIR techniques, instead making use of similarity between users or laborious data compilation. Pandora, for example, uses experts to tag the music with particular qualities such as "female singer" or "strong bassline". Many other systems find users whose listening history is similar and suggest unheard music to those users from their respective collections. MIR techniques for similarity in music are now beginning to form part of such systems.
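As a rough illustration of how MIR-derived features could feed such a recommender, the following is a minimal sketch of content-based track similarity, assuming the librosa and scikit-learn packages are available; the file names, MFCC summary features and cosine similarity are illustrative choices, not a description of any particular production system.

```python
# Minimal sketch: content-based track similarity from audio features.
# Assumes librosa and scikit-learn are installed; file paths are placeholders.
import numpy as np
import librosa
from sklearn.metrics.pairwise import cosine_similarity

def track_vector(path, sr=22050):
    """Summarise a track as the mean and std of its MFCCs."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

paths = ["track_a.mp3", "track_b.mp3", "track_c.mp3"]  # placeholder library
vectors = np.vstack([track_vector(p) for p in paths])

# Pairwise similarity between tracks; a recommender could surface the nearest
# neighbours of a seed track that the user has not yet heard.
sim = cosine_similarity(vectors)
seed = 0
ranked = np.argsort(sim[seed])[::-1]
print([paths[i] for i in ranked if i != seed])
```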

Track separation and instrument recognition

Track separation is about extracting the original tracks as recorded, which could have more than one instrument played per track. Instrument recognition is about identifying the instruments involved and/or separating the music into one track per instrument. Various programs have been developed that can separate music into its component tracks without access to the master copy. In this way e.g. karaoke tracks can be created from normal music tracks, though the process is not yet perfect owing to vocals occupying some of the same frequency space as the other instruments.
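The karaoke example above can be approximated with a very old trick: vocals mixed to the centre of a stereo recording can be attenuated by subtracting one channel from the other. The sketch below assumes the soundfile package and a stereo input file (the name is a placeholder); anything else panned to the centre is also lost, and reverb on the vocal usually survives, which is part of why the results are imperfect.

```python
# Minimal sketch of centre-channel cancellation ("karaoke" trick).
# Assumes the soundfile package and a stereo file; the file name is a placeholder.
import soundfile as sf

audio, sr = sf.read("song.wav")          # shape: (n_samples, 2) for stereo
left, right = audio[:, 0], audio[:, 1]
instrumental = (left - right) * 0.5      # cancels centre-panned content such as lead vocals
sf.write("song_karaoke.wav", instrumental, sr)
```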

Automatic music transcription

Automatic music transcription is the process of converting an audio recording into symbolic notation, such as a score or a MIDI file.[1] This process involves several audio analysis tasks, which may include multi-pitch detection, onset detection, duration estimation, instrument identification, and the extraction of harmonic, rhythmic or melodic information. This task becomes more difficult with greater numbers of instruments and a greater polyphony level.
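Two of the sub-tasks named above, onset detection and pitch estimation, can be sketched for a monophonic recording with librosa, as below. This is not a full transcription system (there is no duration estimation, multi-pitch handling or quantisation to a score), and the file name is a placeholder.

```python
# Sketch of two transcription sub-tasks: onset detection and monophonic
# pitch estimation with librosa. The input file name is a placeholder.
import librosa

y, sr = librosa.load("melody.wav", mono=True)

# Note onsets: times (in seconds) where new events begin.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

# Frame-wise fundamental frequency with the pYIN algorithm (monophonic only).
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

print("onset times:", onsets[:5])
print("first voiced pitches (Hz):", f0[voiced_flag][:5])
```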

Automatic categorization

Musical genre categorization is a common task for MIR and is the usual task for the yearly Music Information Retrieval Evaluation eXchange (MIREX).[2] Machine learning techniques such as Support Vector Machines tend to perform well, despite the somewhat subjective nature of the classification. Other potential classifications include identifying the artist, the place of origin or the mood of the piece. Where the output is expected to be a number rather than a class, regression analysis is required.
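A minimal sketch of such a classifier, assuming a small labelled collection of audio files (the paths and genre labels below are placeholders), pairs MFCC summary features with a Support Vector Machine from scikit-learn. Real evaluations use far larger collections and cross-validation.

```python
# Sketch of genre classification: MFCC summary features + an SVM.
# Assumes librosa and scikit-learn; file names and labels are placeholders.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def features(path):
    y, sr = librosa.load(path, mono=True, duration=30.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

dataset = [("rock_01.mp3", "rock"), ("jazz_01.mp3", "jazz"),
           ("rock_02.mp3", "rock"), ("jazz_02.mp3", "jazz")]  # placeholders
X = np.vstack([features(p) for p, _ in dataset])
labels = [label for _, label in dataset]

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0
)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```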

Music generation

The automatic generation of music is a goal held by many MIR researchers. Attempts have been made with limited success in terms of human appreciation of the results.

Methods used

Data source

Scores give a clear and logical description of music from which to work, but access to sheet music, whether digital or otherwise, is often impractical. MIDI music has also been used for similar reasons, but some data is lost in the conversion to MIDI from any other format, unless the music was written with the MIDI standards in mind, which is rare. Digital audio formats such as WAV, MP3, and Ogg are used when the audio itself is part of the analysis. Lossy formats such as MP3 and Ogg work well with the human ear but may be missing crucial data for study. Additionally, some encodings create artifacts which could be misleading to any automatic analyser. Despite this, the ubiquity of the MP3 has meant that much research in the field uses these files as the source material. Increasingly, metadata mined from the web is incorporated in MIR for a more rounded understanding of the music within its cultural context, and this recently includes analysis of social tags for music.

Feature representation

Analysis can often require some summarising,[3] and for music (as with many other forms of data) this is achieved by feature extraction, especially when the audio content itself is analysed and machine learning is to be applied. The purpose is to reduce the sheer quantity of data down to a manageable set of values so that learning can be performed within a reasonable time frame. One common feature extracted is the mel-frequency cepstral coefficient (MFCC), which is a measure of the timbre of a piece of music. Other features may be employed to represent the key, chords, harmonies, melody, main pitch, beats per minute or rhythm in the piece. A number of audio feature extraction tools are available.[4]
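The sketch below illustrates this reduction with librosa, one such feature extraction tool: a track of millions of samples is summarised as a few dozen descriptors covering timbre (MFCCs), harmony (chroma) and tempo. The file name and the particular choice of features are illustrative.

```python
# Sketch of feature extraction: reducing raw audio to a small feature vector.
# Assumes librosa; the input file name is a placeholder.
import numpy as np
import librosa

y, sr = librosa.load("example.ogg", mono=True)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # timbre
chroma = librosa.feature.chroma_stft(y=y, sr=sr)        # pitch-class / harmony
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)          # beats per minute

# Summarise frame-wise features over time so the whole track becomes one vector.
feature_vector = np.concatenate([
    mfcc.mean(axis=1), chroma.mean(axis=1), np.atleast_1d(tempo)
])
print(feature_vector.shape)   # a few dozen values instead of millions of samples
```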

Statistics and machine learning

  • Computational methods for classification, clustering, and modelling — musical feature extraction for mono- and polyphonic music, similarity and pattern matching, retrieval
  • Formal methods and databases — applications of automated music identification and recognition, such as score following, automatic accompaniment, routing and filtering for music and music queries, query languages, standards and other metadata or protocols for music information handling and retrieval, multi-agent systems, distributed search
  • Software for music information retrieval — Semantic Web and musical digital objects, intelligent agents, collaborative software, web-based search and semantic retrieval, query by humming, acoustic fingerprinting (a simplified fingerprinting sketch follows this list)
  • Music analysis and knowledge representation — automatic summarization, citing, excerpting, downgrading, transformation, formal models of music, digital scores and representations, music indexing and metadata.
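As referenced in the software bullet above, the following is a much-simplified sketch of acoustic fingerprinting: prominent spectrogram peaks are picked out and pairs of nearby peaks are hashed, in the spirit of landmark-based systems such as Shazam's. It assumes librosa and SciPy; the thresholds, window sizes and file names are illustrative and untuned, and a real system would also verify the time alignment of matching hashes.

```python
# Simplified acoustic fingerprinting sketch: spectrogram peak pairs as hashes.
# Assumes librosa and scipy; parameters and file names are placeholders.
import numpy as np
import librosa
from scipy.ndimage import maximum_filter

def fingerprints(path, fan_out=5):
    y, sr = librosa.load(path, mono=True)
    S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

    # Keep only local spectral peaks well above the median magnitude.
    local_max = maximum_filter(S, size=(20, 20)) == S
    peaks = np.argwhere(local_max & (S > 5 * np.median(S)))  # (freq_bin, frame)
    peaks = peaks[np.argsort(peaks[:, 1])]                   # order by time

    # Hash each peak with a few peaks that follow it closely in time.
    hashes = set()
    for i, (f1, t1) in enumerate(peaks):
        for f2, t2 in peaks[i + 1 : i + 1 + fan_out]:
            dt = t2 - t1
            if 0 < dt <= 100:
                hashes.add((int(f1), int(f2), int(dt)))
    return hashes

# Matching two recordings = counting shared hashes (placeholder file names).
a, b = fingerprints("query_clip.wav"), fingerprints("library_track.wav")
print("shared landmarks:", len(a & b))
```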

Other issues

  • Human-computer interaction and interfaces — multi-modal interfaces, user interfaces and usability, mobile applications, user behavior
  • Music perception, cognition, affect, and emotions — music similarity metrics, syntactical parameters, semantic parameters, musical forms, structures, styles and music annotation methodologies
  • Music archives, libraries, and digital collections — music digital libraries, public access to musical archives, benchmarks and research databases
  • Intellectual property rights and music — national and international copyright issues, digital rights management, identification and traceability
  • Sociology and Economy of music — music industry and use of MIR in the production, distribution, consumption chain, user profiling, validation, user needs and expectations, evaluation of music IR systems, building test collections, experimental design and metrics
