Description

An entropy encoder is a data encoding method that achieves lossless data compression by encoding a message with "wasted" or "extra" information removed. In other words, entropy encoding removes information that was not necessary in the first place to accurately encode the message. A high degree of entropy implies a message with a great deal of wasted information; English text encoded in ASCII is an example of a message type that has very high entropy. Already compressed messages, such as JPEG graphics or ZIP archives, have very little entropy and do not benefit from further attempts at entropy encoding.

English text encoded in ASCII has a high degree of entropy because all characters are encoded using the same number of bits, eight. It is a known fact that the letters E, L, N, R, S and T occur at a considerably higher frequency than do most other letters in English text. If a way could be found to encode just these letters with four bits, then the new encoding would be smaller, would contain all the original information, and would have less entropy. ASCII uses a fixed number of bits for a reason, however: it's easy, since one is always dealing with a fixed number of bits to represent each possible glyph or character. How would an encoding scheme that used four bits for the above letters be able to distinguish between the four-bit codes and eight-bit codes? This seemingly difficult problem is solved using what is known as a "prefix-free variable-length" encoding.

In such an encoding, any number of bits can be used to represent any glyph, and glyphs not present in the message are simply not encoded. However, in order to be able to recover the information, no bit pattern that encodes a glyph is allowed to be the prefix of any other encoding bit pattern. This allows the encoded bitstream to be read bit by bit, and whenever a set of bits is encountered that represents a glyph, that glyph can be decoded. If the prefix-free constraint was not enforced, then such a decoding would be impossible. 

Consider the text "AAAAABCD". Using ASCII, encoding this would require 64 bits. If, instead, we encode "A" with the bit pattern "00", "B" with "01", "C" with "10", and "D" with "11" then we can encode this text in only 16 bits; the resulting bit pattern would be "0000000000011011". This is still a fixed-length encoding, however; we’re using two bits per glyph instead of eight. Since the glyph "A" occurs with greater frequency, could we do better by encoding it with fewer bits? In fact we can, but in order to maintain a prefix-free encoding, some of the other bit patterns will become longer than two bits. An optimal encoding is to encode "A" with "0", "B" with "10", "C" with "110", and "D" with "111". (This is clearly not the only optimal encoding, as it is obvious that the encodings for B, C and D could be interchanged freely for any given encoding without increasing the size of the final encoded message.) Using this encoding, the message encodes in only 13 bits to "0000010110111", a compression ratio of 4.9 to 1 (that is, each bit in the final encoded message represents as much information as did 4.9 bits in the original encoding). Read through this bit pattern from left to right and you’ll see that the prefix-free encoding makes it simple to decode this into the original text even though the codes have varying bit lengths. 

As a second example, consider the text "THE CAT IN THE HAT". In this text, the letter "T" and the space character both occur with the highest frequency, so they will clearly have the shortest encoding bit patterns in an optimal encoding. The letters "C", "I" and "N" only occur once, however, so they will have the longest codes.

There are many possible sets of prefix-free variable-length bit patterns that would yield the optimal encoding, that is, that would allow the text to be encoded in the fewest number of bits. One such optimal encoding is to encode spaces with "00", "A" with "100", "C" with "1110", "E" with "1111", "H" with "110", "I" with "1010", "N" with "1011" and "T" with "01". The optimal encoding therefore requires only 51 bits compared to the 144 that would be necessary to encode the message with 8-bit ASCII encoding, a compression ratio of 2.8 to 1. 

Input

The input file will contain a list of text strings, one per line. The text strings will consist only of uppercase alphanumeric characters and underscores (which are used in place of spaces). The end of the input will be signalled by a line containing only the word “END” as the text string. This line should not be processed.

Output

For each text string in the input, output the length in bits of the 8-bit ASCII encoding, the length in bits of an optimal prefix-free variable-length encoding, and the compression ratio accurate to one decimal point.

Sample Input

AAAAABCD
THE_CAT_IN_THE_HAT
END

Sample Output

64 13 4.9
144 51 2.8
/*
 * Application of Huffman trees:
 * count the distinct characters in a string, encode them with both
 * 8-bit ASCII and a Huffman code, and compare the encoded sizes.
 * Input strings here consist of uppercase letters and underscores.
 * Input:
 *   AAAAABCD
 *   THE_CAT_IN_THE_HAT
 *   END          <- sentinel that terminates the input
 * Output:
 *   64 13 4.9
 *   144 51 2.8
 */
#include <stdio.h>
#include <string.h>
#define N 28                /* 26 letters plus underscore, with room to spare */
typedef struct {
    char ch;                /* the character itself */
    int num;                /* number of occurrences */
    int F, L, R;            /* parent, left child, right child indices */
    int sum;                /* length in bits of this character's Huffman code */
    char code[20];          /* the Huffman code as a bit string */
} Hcode;
Hcode HT[2 * N];
/* Count the distinct characters in s and their frequencies.
 * Leaves are stored in HT[1..l]; returns l, the number of distinct characters. */
int Order(Hcode HT[], char *s)
{
    int i, j, x, l = 0, flag;
    i = strlen(s);
    for (j = 0; j < i; j++) {
        flag = 0;
        for (x = 1; x <= l; x++) {
            if (s[j] == HT[x].ch) {
                flag = x;           /* character already stored: remember its index */
                break;
            }
        }
        if (flag == 0) {            /* first occurrence: append a new leaf */
            l++;
            HT[l].ch = s[j];
            HT[l].num = 1;
            HT[l].F = 0;
            HT[l].sum = 0;
        } else {                    /* seen before: bump its count */
            HT[flag].num++;
        }
    }
    return l;
}
/* Build the Huffman tree over the l leaves by repeatedly merging the two
 * parentless nodes with the smallest counts; internal nodes fill HT[l+1..2l-1]. */
void Haffman(Hcode HT[], int l)
{
    int i, m, x, y, z, p, min1, min2;
    m = 2 * l - 1;
    for (i = l + 1; i <= m; i++) {
        p = i - 1;
        min1 = min2 = 9999;         /* sentinel; assumes counts stay below 9999 */
        y = z = 0;
        for (x = 1; x <= p; x++) {  /* y = parentless node with the smallest count */
            if (HT[x].F == 0 && HT[x].num <= min1) {
                min1 = HT[x].num;
                y = x;
            }
        }
        for (x = 1; x <= p; x++) {  /* z = second smallest parentless node */
            if (x != y && HT[x].F == 0 && HT[x].num <= min2) {
                min2 = HT[x].num;
                z = x;
            }
        }
        /* Create a parent for the two minimal nodes and link both directions. */
        HT[i].num = HT[y].num + HT[z].num;
        HT[i].F = 0;
        HT[i].sum = 0;
        HT[i].L = y;
        HT[i].R = z;
        HT[y].F = i;
        HT[z].F = i;
    }
}
/* Derive each leaf's code by walking from the leaf up to the root,
 * then reverse the collected bits into HT[i].code. */
void Tree(Hcode HT[], int l)
{
    int i, x, f, a, b;
    char str[20];
    for (i = 1; i <= l; i++) {
        a = 0;
        for (x = i, f = HT[x].F; f != 0; x = f, f = HT[x].F) {
            if (HT[f].L == x) str[a++] = '1';
            else              str[a++] = '0';
        }
        /* The bits were collected leaf-to-root, so copy them in reverse. */
        b = 0;
        a--;
        while (a >= 0)
            HT[i].code[b++] = str[a--];
        HT[i].code[b] = '\0';       /* terminate: HT is reused across test cases */
    }
}
int main()
{
    char str[1000];
    int l, i, all, max, x, f;

    scanf("%s", str);
    while (strcmp(str, "END") != 0) {   /* the sentinel line is not processed */
        all = 0;
        /* HT is a global reused across test cases, so reset it fully. */
        for (i = 0; i < 2 * N; i++) {
            HT[i].ch = ' ';
            HT[i].num = HT[i].F = HT[i].L = HT[i].R = HT[i].sum = 0;
            HT[i].code[0] = '\0';
        }
        l = Order(HT, str);
        max = strlen(str) * 8;          /* length of the 8-bit ASCII encoding */
        if (l == 1) {
            HT[l].sum = 1;              /* special case: a lone glyph still needs 1 bit */
        } else {
            Haffman(HT, l);             /* build the Huffman tree */
            Tree(HT, l);                /* derive each character's code */
        }
        for (i = 1; i <= l; i++) {
            /* Code length = depth of the leaf = number of steps up to the root. */
            for (x = i, f = HT[x].F; f != 0; x = f, f = HT[x].F)
                HT[i].sum++;
            all = all + HT[i].num * HT[i].sum;
        }
        printf("%d %d %.1f\n", max, all, (float)max / all);
        scanf("%s", str);
    }
    return 0;
}

  
