
Research on a Fast Neural-Network Lossless Compression Method

Credits required: 0 | Format: rar | Size: 333 | 2008-12-31


Neural networks are widely applied to lossy coding of audio, images and similar data, but rarely to lossless data coding. Against this background, this paper studies the respective characteristics of maximum-entropy statistical models and neural-network algorithms in detail, and proposes a neural-network probability prediction model based on the maximum-entropy principle, combined with adaptive arithmetic coding, for data compression; the adaptive online learning algorithm has a compact network structure. Experiments show that the algorithm can outperform the popular Lempel-Ziv compressors (zip, gzip) in compression ratio, and is also competitive with the PPM and Burrows-Wheeler algorithms in running time and space requirements. The algorithm is implemented as a two-layer neural network with multiple inputs and a single output; the parameters learned from already-coded bits serve as the working parameters for the bit to be coded. This matches the context-dependent nature of the data, improves prediction accuracy, and saves coding time.
Keywords: arithmetic coding; data compression; maximum entropy; neural network
Lossless Data Compression with Neural Network Based on Maximum Entropy Theory
FU Yan,ZHOU Jun-lin,WU Yue
Neural networks are used more frequently in lossy data coding domains such as audio and image than in general lossless data coding, because standard neural networks must be trained off-line and are too slow to be practical. In this paper, an adaptive arithmetic coding algorithm based on maximum entropy and neural networks is proposed for data compression. This adaptive algorithm has a simple structure, learns on-line, and does not need to be trained off-line. Experiments show that this algorithm surpasses traditional coding methods such as the Lempel-Ziv compressors (zip, gzip) in compression ratio, and is competitive in speed and memory with traditional methods such as PPM and the Burrows-Wheeler algorithm. The compressor is a bit-level predictive arithmetic coder that uses a two-layer network with multiple inputs and one output. The algorithm, conforming to the context constraint, improves the precision of prediction and reduces the coding time.
Key words: arithmetic coding; data compression; maximum entropy; neural network
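The bit-level predictive scheme the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: a single logistic unit over the last few already-coded bits stands in for the paper's multi-input, single-output two-layer network, and the ideal code length (the sum of -log2 of the predicted probability of each actual bit) approximates what an adaptive arithmetic coder driven by these probabilities would emit. All names here (`OnlineBitPredictor`, the context size, the learning rate) are assumptions for illustration.

```python
import math

class OnlineBitPredictor:
    """Online predictor for the next bit, sketching the paper's
    multi-input single-output network: here a single logistic unit
    whose inputs are the last `context` already-coded bits."""

    def __init__(self, context=8, lr=0.2):
        self.w = [0.0] * context   # one weight per previous bit
        self.b = 0.0               # bias
        self.lr = lr               # online learning rate
        self.ctx = [0] * context   # sliding window of recent bits

    def predict(self):
        """Return P(next bit = 1) given the current context."""
        z = self.b + sum(w * x for w, x in zip(self.w, self.ctx))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, bit):
        """Learn from the bit just coded (one log-loss gradient step),
        then slide it into the context window."""
        err = bit - self.predict()
        self.b += self.lr * err
        self.w = [w + self.lr * err * x for w, x in zip(self.w, self.ctx)]
        self.ctx = self.ctx[1:] + [bit]

def ideal_code_length(bits, context=8):
    """Bits an ideal adaptive arithmetic coder driven by this predictor
    would spend on the sequence: sum of -log2 P(actual bit)."""
    model = OnlineBitPredictor(context)
    total = 0.0
    for bit in bits:
        p = model.predict()
        total += -math.log2(p if bit == 1 else 1.0 - p)
        model.update(bit)   # encoder and decoder update identically
    return total

# A repetitive source: 400 alternating bits compress well below 400 raw bits.
seq = [i % 2 for i in range(400)]
print(ideal_code_length(seq))
```

A real arithmetic coder would map each predicted probability to a subdivision of the current coding interval; the decoder, running the same predictor over the bits it has already decoded, reproduces the same probabilities and stays in sync. That symmetry is exactly what the abstract's "parameters learned from already-coded bits serve as working parameters for the bit to be coded" relies on.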
