HDU1053 Huffman Coding
Entropy
Time Limit: 2000/1000 MS (Java/Others) Memory Limit: 65536/32768 K (Java/Others)
Total Submission(s): 5972 Accepted Submission(s): 2507
Problem Description
An entropy encoder is a data encoding method that achieves lossless data
compression by encoding a message with “wasted” or “extra” information
removed. In other words, entropy encoding removes information that was
not necessary in the first place to accurately encode the message. A
high degree of entropy implies a message with a great deal of wasted
information; English text encoded in ASCII is an example of a message
type that has very high entropy. Already compressed messages, such as
JPEG graphics or ZIP archives, have very little entropy and do not
benefit from further attempts at entropy encoding.
English text
encoded in ASCII has a high degree of entropy because all characters are
encoded using the same number of bits, eight. It is a known fact that
the letters E, L, N, R, S and T occur at a considerably higher frequency
than do most other letters in English text. If a way could be found to
encode just these letters with four bits, then the new encoding would be
smaller, would contain all the original information, and would have
less entropy. ASCII uses a fixed number of bits for a reason, however:
it’s easy, since one is always dealing with a fixed number of bits to
represent each possible glyph or character. How would an encoding scheme
that used four bits for the above letters be able to distinguish
between the four-bit codes and eight-bit codes? This seemingly difficult
problem is solved using what is known as a “prefix-free
variable-length” encoding.
In such an encoding, any number of
bits can be used to represent any glyph, and glyphs not present in the
message are simply not encoded. However, in order to be able to recover
the information, no bit pattern that encodes a glyph is allowed to be
the prefix of any other encoding bit pattern. This allows the encoded
bitstream to be read bit by bit, and whenever a set of bits is
encountered that represents a glyph, that glyph can be decoded. If the
prefix-free constraint were not enforced, then such a decoding would be
impossible.
Consider the text “AAAAABCD”. Using ASCII, encoding
this would require 64 bits. If, instead, we encode “A” with the bit
pattern “00”, “B” with “01”, “C” with “10”, and “D” with “11” then we
can encode this text in only 16 bits; the resulting bit pattern would be
“0000000000011011”. This is still a fixed-length encoding, however;
we’re using two bits per glyph instead of eight. Since the glyph “A”
occurs with greater frequency, could we do better by encoding it with
fewer bits? In fact we can, but in order to maintain a prefix-free
encoding, some of the other bit patterns will become longer than two
bits. An optimal encoding is to encode “A” with “0”, “B” with “10”, “C”
with “110”, and “D” with “111”. (This is clearly not the only optimal
encoding, as it is obvious that the encodings for B, C and D could be
interchanged freely for any given encoding without increasing the size
of the final encoded message.) Using this encoding, the message encodes
in only 13 bits to “0000010110111”, a compression ratio of 4.9 to 1
(that is, each bit in the final encoded message represents as much
information as did 4.9 bits in the original encoding). Read through this
bit pattern from left to right and you’ll see that the prefix-free
encoding makes it simple to decode this into the original text even
though the codes have varying bit lengths.
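To see the prefix-free decoding in action, here is a minimal C++ sketch (an illustration only, not the judge solution) that reads the 13-bit pattern above from left to right and emits a glyph whenever the bits collected so far match one of the codewords “0”, “10”, “110”, “111”:

#include <iostream>
#include <map>
#include <string>
using namespace std;

int main()
{
    // Code table from the example above: A->"0", B->"10", C->"110", D->"111".
    map<string,char> code;
    code["0"]='A';
    code["10"]='B';
    code["110"]='C';
    code["111"]='D';

    string bits="0000010110111";   // the 13-bit encoding of "AAAAABCD"
    string cur,decoded;
    for(size_t i=0;i<bits.size();i++)
    {
        cur+=bits[i];              // take one more bit
        if(code.count(cur))        // the bits read so far form a complete codeword
        {
            decoded+=code[cur];
            cur.clear();
        }
    }
    cout<<decoded<<endl;           // prints AAAAABCD
    return 0;
}

Because no codeword is a prefix of another, the first match found by this loop is always the intended glyph; without the prefix-free property the same bit stream could be split in more than one way.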
As a second example,
consider the text “THE CAT IN THE HAT”. In this text, the letter “T” and
the space character both occur with the highest frequency, so they will
clearly have the shortest encoding bit patterns in an optimal encoding.
The letters “C”, “I” and “N” only occur once, however, so they will
have the longest codes.
There are many possible sets of
prefix-free variable-length bit patterns that would yield the optimal
encoding, that is, that would allow the text to be encoded in the fewest
number of bits. One such optimal encoding is to encode spaces with
“00”, “A” with “100”, “C” with “1110”, “E” with “1111”, “H” with “110”,
“I” with “1010”, “N” with “1011” and “T” with “01”. The optimal encoding
therefore requires only 51 bits compared to the 144 that would be
necessary to encode the message with 8-bit ASCII encoding, a compression
ratio of 2.8 to 1.
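The 51-bit figure can be reproduced without writing down any particular code table: the total length of an optimal prefix-free encoding equals the sum of the weights produced by repeatedly merging the two least frequent glyphs (the Huffman construction). Below is a small C++ sketch of that computation for the text above (an illustration only; spaces are used here instead of the underscores required by the input format):

#include <iostream>
#include <map>
#include <queue>
#include <vector>
#include <functional>
#include <string>
using namespace std;

int main()
{
    string text="THE CAT IN THE HAT";

    // Count how often each glyph occurs.
    map<char,int> freq;
    for(size_t i=0;i<text.size();i++)
        freq[text[i]]++;

    // Min-heap of frequencies: repeatedly merge the two smallest.
    priority_queue<int,vector<int>,greater<int> > pq;
    for(map<char,int>::iterator it=freq.begin();it!=freq.end();++it)
        pq.push(it->second);

    int total=0;                    // optimal encoded length in bits
    while(pq.size()>1)
    {
        int x=pq.top(); pq.pop();
        int y=pq.top(); pq.pop();
        total+=x+y;                 // each merge makes every glyph below it one bit longer
        pq.push(x+y);
    }

    cout<<total<<" "<<8*text.size()<<endl;   // prints 51 144
    return 0;
}

Each merge pushes the glyphs in the two merged groups one level deeper in the code tree, i.e. adds one bit to each of their occurrences, so the running sum of merged weights counts exactly the total number of encoded bits. The solution code below does the same thing with scanf/printf I/O.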
Input
The input file will contain a list of text strings, one per line. The text
strings will consist only of uppercase alphanumeric characters and
underscores (which are used in place of spaces). The end of the input
will be signalled by a line containing only the word “END” as the text
string. This line should not be processed.
Output
For each text string in the input, output the length in bits of the 8-bit
ASCII encoding, the length in bits of an optimal prefix-free
variable-length encoding, and the compression ratio accurate to one
decimal point.
Sample Input
AAAAABCD
THE_CAT_IN_THE_HAT
END

Sample Output
64 13 4.9
144 51 2.8
//Took me a while to understand... Count how many times each character occurs and store the counts in a min-priority queue; each round, pop the two smallest values, add them together, and push the sum back until only one node is left in the queue. The running total of these sums is the optimal encoded length in bits.
#include<iostream>
#include<cstdio>
#include<cstring>
#include<queue>
#include<functional>
#include<vector>
using namespace std;
int a[30];      // a[0] counts '_', a[1..26] count 'A'..'Z'
char s[10005];  // input text string
int ans;
int main()
{
    while(scanf("%s",s)!=EOF)
    {
        if(!strcmp(s,"END"))
            break;
        priority_queue<int,vector<int>,greater<int> >q;   // min-heap of glyph frequencies
        int len=strlen(s);
        memset(a,0,sizeof(a));
        for(int i=0;i<len;i++)
        {
            if(s[i]=='_')
                a[0]++;
            else a[s[i]-'A'+1]++;
        }
        for(int i=0;i<=26;i++)
            if(a[i]!=0)
                q.push(a[i]);
        if(q.size()==1)          // only one distinct glyph: one bit per character
            ans=len;
        else
        {
            ans=0;
            while(q.size()!=1)   // repeatedly merge the two smallest weights
            {
                int x=q.top();
                q.pop();
                int y=q.top();
                q.pop();
                ans=ans+x+y;     // each merge adds its weight to the total bit count
                q.push(x+y);
            }
        }
        printf("%d %d %.1lf\n",8*len,ans,(double)(8*len)/(double)ans);
    }
    return 0;
}