HDU1053 Huffman Coding
Entropy
Time Limit: 2000/1000 MS (Java/Others) Memory Limit: 65536/32768 K (Java/Others)
Total Submission(s): 5972 Accepted Submission(s): 2507
An entropy encoder is a data encoding method that achieves lossless data
compression by encoding a message with “wasted” or “extra” information
removed. In other words, entropy encoding removes information that was
not necessary in the first place to accurately encode the message. A
high degree of entropy implies a message with a great deal of wasted
information; English text encoded in ASCII is an example of a message
type that has very high entropy. Already compressed messages, such as
JPEG graphics or ZIP archives, have very little entropy and do not
benefit from further attempts at entropy encoding.
English text
encoded in ASCII has a high degree of entropy because all characters are
encoded using the same number of bits, eight. It is a known fact that
the letters E, L, N, R, S and T occur at a considerably higher frequency
than do most other letters in English text. If a way could be found to
encode just these letters with four bits, then the new encoding would be
smaller, would contain all the original information, and would have
less entropy. ASCII uses a fixed number of bits for a reason, however:
it’s easy, since one is always dealing with a fixed number of bits to
represent each possible glyph or character. How would an encoding scheme
that used four bits for the above letters be able to distinguish
between the four-bit codes and eight-bit codes? This seemingly difficult
problem is solved using what is known as a “prefix-free
variable-length” encoding.
In such an encoding, any number of
bits can be used to represent any glyph, and glyphs not present in the
message are simply not encoded. However, in order to be able to recover
the information, no bit pattern that encodes a glyph is allowed to be
the prefix of any other encoding bit pattern. This allows the encoded
bitstream to be read bit by bit, and whenever a set of bits is
encountered that represents a glyph, that glyph can be decoded. If the
prefix-free constraint were not enforced, then such a decoding would be
impossible.
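For a concrete illustration (a small example added here, not taken from the problem statement): with the prefix-free code words "0", "10", "110" and "111", the bit string "110010" can only be split as "110", then "0", then "10"; no other segmentation into code words exists, precisely because no code word is a prefix of another.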
Consider the text “AAAAABCD”. Using ASCII, encoding
this would require 64 bits. If, instead, we encode “A” with the bit
pattern “00”, “B” with “01”, “C” with “10”, and “D” with “11” then we
can encode this text in only 16 bits; the resulting bit pattern would be
“0000000000011011”. This is still a fixed-length encoding, however;
we’re using two bits per glyph instead of eight. Since the glyph “A”
occurs with greater frequency, could we do better by encoding it with
fewer bits? In fact we can, but in order to maintain a prefix-free
encoding, some of the other bit patterns will become longer than two
bits. An optimal encoding is to encode “A” with “0”, “B” with “10”, “C”
with “110”, and “D” with “111”. (This is clearly not the only optimal
encoding, as it is obvious that the encodings for B, C and D could be
interchanged freely for any given encoding without increasing the size
of the final encoded message.) Using this encoding, the message encodes
in only 13 bits to “0000010110111”, a compression ratio of 4.9 to 1
(that is, each bit in the final encoded message represents as much
information as did 4.9 bits in the original encoding). Read through this
bit pattern from left to right and you’ll see that the prefix-free
encoding makes it simple to decode this into the original text even
though the codes have varying bit lengths.
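As a quick illustration of that bit-by-bit decoding, here is a small C++ sketch (added for this write-up, not part of the problem or of the solution below) that decodes the 13-bit pattern using the A/B/C/D code table from this example:
#include<cstdio>
#include<map>
#include<string>
using namespace std;

int main()
{
    map<string,char> table;              // code word -> glyph
    table["0"]='A';
    table["10"]='B';
    table["110"]='C';
    table["111"]='D';

    string bits="0000010110111";         // the 13-bit encoding of "AAAAABCD"
    string buf;                          // bits read since the last decoded glyph
    for(size_t i=0;i<bits.size();i++)
    {
        buf+=bits[i];                    // read one more bit
        if(table.count(buf))             // a complete code word has been seen
        {
            putchar(table[buf]);
            buf.clear();
        }
    }
    putchar('\n');                       // prints AAAAABCD
    return 0;
}
Because no code word is a prefix of another, the buffer can match at most one table entry at a time, so the loop never has to backtrack.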
As a second example,
consider the text “THE CAT IN THE HAT”. In this text, the letter “T” and
the space character both occur with the highest frequency, so they will
clearly have the shortest encoding bit patterns in an optimal encoding.
The letters “C”, “I” and “N” only occur once, however, so they will
have the longest codes.
There are many possible sets of
prefix-free variable-length bit patterns that would yield the optimal
encoding, that is, that would allow the text to be encoded in the fewest
number of bits. One such optimal encoding is to encode spaces with
“00”, “A” with “100”, “C” with “1110”, “E” with “1111”, “H” with “110”,
“I” with “1010”, “N” with “1011” and “T” with “01”. The optimal encoding
therefore requires only 51 bits compared to the 144 that would be
necessary to encode the message with 8-bit ASCII encoding, a compression
ratio of 2.8 to 1.
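As a quick check on those numbers (arithmetic added here, not part of the original statement): the 18-character message contains 4 spaces, 4 T's, 3 H's, 2 E's, 2 A's and one each of C, I and N, so the encoding above costs 4×2 + 4×2 + 3×3 + 2×4 + 2×3 + 4 + 4 + 4 = 51 bits, against 18×8 = 144 ASCII bits, giving 144/51 ≈ 2.8.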
The input file will contain a list of text strings, one per line. The text
strings will consist only of uppercase alphanumeric characters and
underscores (which are used in place of spaces). The end of the input
will be signalled by a line containing only the word “END” as the text
string. This line should not be processed.
For each text string in the input, output the length in bits of the 8-bit
ASCII encoding, the length in bits of an optimal prefix-free
variable-length encoding, and the compression ratio accurate to one
decimal point.
Sample Input
AAAAABCD
THE_CAT_IN_THE_HAT
END

Sample Output
64 13 4.9
144 51 2.8
// I don't quite get it... count how many times each character occurs, store the counts as nodes in a priority queue ordered from smallest to largest, each time take the two smallest out of the queue, add them together and push the sum back, until only one node remains in the queue.
#include<iostream>
#include<cstdio>
#include<cstring>
#include<queue>
#include<functional>
#include<vector>
using namespace std;

int a[30];          // a[0] counts '_', a[1..26] count 'A'..'Z'
char s[100005];     // input text string
int ans;

int main()
{
    while(scanf("%s",s)==1)
    {
        if(!strcmp(s,"END"))
            break;
        priority_queue<int,vector<int>,greater<int> > q;   // min-heap of frequencies
        int len=strlen(s);
        memset(a,0,sizeof(a));
        for(int i=0;i<len;i++)
        {
            if(s[i]=='_')
                a[0]++;
            else
                a[s[i]-'A'+1]++;
        }
        for(int i=0;i<=26;i++)
            if(a[i]!=0)
                q.push(a[i]);
        if(q.size()==1)          // only one distinct character: one bit per occurrence
            ans=len;
        else
        {
            ans=0;
            while(q.size()!=1)   // merge the two smallest weights until one node remains
            {
                int x=q.top(); q.pop();
                int y=q.top(); q.pop();
                ans=ans+x+y;     // each merge's weight is the number of extra bits it costs
                q.push(x+y);
            }
        }
        printf("%d %d %.1lf\n",8*len,ans,(double)(8*len)/(double)ans);
    }
    return 0;
}
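A note on why this works (added to address the "I don't quite get it" in the comment at the top of the code): each time two nodes are merged, every character contained in them ends up one level deeper in the Huffman tree, so its code gets one bit longer. Adding the merged weight to ans therefore charges exactly one extra bit to every occurrence of those characters, and once the queue has shrunk to a single node, ans equals the sum of frequency times code length, i.e. the length of the optimal encoding. Tracing "AAAAABCD" (frequencies 5, 1, 1, 1): merge 1+1=2 (ans=2), then 1+2=3 (ans=5), then 3+5=8 (ans=13), matching the 13 bits derived earlier.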