[1] proposed applying CS to streaming video by sampling several frames jointly or independently, but it did not consider inter-frame redundancy.
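The two sampling options can be sketched in a few lines of numpy. This is a minimal illustration only; the frame sizes, measurement counts, and Gaussian measurement matrices are my own assumptions, not details from [1]:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M = 4, 64, 16              # frames, pixels per frame, measurements per frame
video = rng.standard_normal((T, N))

# Independent sampling: a separate random measurement matrix per frame.
y_ind = [rng.standard_normal((M, N)) @ video[t] for t in range(T)]

# Joint sampling: stack all frames into one vector and measure together,
# so a single matrix mixes information across frames.
Phi = rng.standard_normal((T * M, T * N))
y_joint = Phi @ video.ravel()
```

Either way, each frame (or the stacked group) is reduced from N pixels to M measurements; what neither option exploits is the similarity between consecutive frames.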
[2] focused on increasing the resolution of digital video, so it did little for video coding/compression.
[3] and [4] proposed compressed video sensing in 2008.
[3] used a hybrid way to compress video. Its main contribution, I think, is the scheme itself: transmit both a conventionally encoded (low-resolution) and a CS-encoded (high-resolution) video stream, and reconstruct on demand (conventional decoding for coarse scale, CS decoding for fine scale).
Compared to [3], I think [4] is more important for CVS. Its approach classifies the blocks of a frame as dense or sparse via a CS test: dense blocks are encoded conventionally, while sparse blocks are encoded with CS. The CS test for a block is itself another contribution worth noting.
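[4]'s actual test operates on the CS measurements themselves; as a rough proxy for the idea, here is a minimal sketch that labels a block sparse when most of its energy concentrates in a few transform coefficients. The DCT basis, block size, and both thresholds are my own assumptions:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def is_sparse_block(block, keep=0.1, energy=0.95):
    # Sparse if the largest `keep` fraction of 2-D DCT coefficients
    # carries at least `energy` of the block's total energy.
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    mags = np.sort(np.abs(coeffs).ravel())[::-1]
    k = max(1, int(keep * mags.size))
    total = np.sum(mags ** 2)
    return total == 0 or np.sum(mags[:k] ** 2) >= energy * total
```

A flat block passes the test (one nonzero DCT coefficient), while a noise block spreads its energy across all coefficients and fails, which is exactly the split the encoder wants: CS for the former, conventional coding for the latter.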
In 2009, most work focused on distributed CVS, based on the notion of Distributed Video Coding (DVC). [5] uses the reconstructed key frames to build a sparse basis for the CS frames, and proposes L1, SKIP, and SINGLE modes for CS frames; the codec is quite similar to pixel-domain DVC. [6] also uses reconstructed key frames to generate side information, but its side information is a prediction rather than a sparse basis. Furthermore, [6] uses both frame-based and block-based encoding for the CS frames, which is quite novel, though I think it improves performance at the cost of some redundancy. Unlike [5] and [6], [7] uses CS for both key frames and non-key frames, and proposes a modified GPSR for DCVS. It also contains a relatively complete review of techniques such as CS, DVC, and DCS, which I think is quite useful for beginners in this area.
[8] proposed a very interesting multiscale framework. It employs the LIMAT framework [11] to exploit motion information and remove temporal redundancy, and it iterates across scales: reconstruct successively finer-resolution approximations of each frame using motion vectors estimated at coarser scales, and alternately use these approximations to refine the motion estimates. The multiscale framework essentially exploits the structure of the wavelet transform (coarse scales and fine scales).
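Stripped of LIMAT and of CS reconstruction entirely, the coarse-to-fine idea can be illustrated with a toy 1-D global motion estimator: estimate a shift on a heavily downsampled signal, then promote and refine it at each finer scale. The averaging pyramid, circular shifts, and search window here are my own simplifications, not [8]'s method:

```python
import numpy as np

def coarse_to_fine_shift(ref, cur, levels=3, search=2):
    """Estimate a global circular shift between two 1-D frames,
    refining the estimate from the coarsest scale to the finest."""
    shift = 0
    for lvl in range(levels - 1, -1, -1):
        shift *= 2                           # promote estimate to the finer scale
        f = 2 ** lvl                         # downsampling factor at this level
        r = ref.reshape(-1, f).mean(axis=1)  # coarse approximations
        c = cur.reshape(-1, f).mean(axis=1)
        # Small local search around the current estimate.
        best = min(range(-search, search + 1),
                   key=lambda d: np.sum((np.roll(r, shift + d) - c) ** 2))
        shift += best
    return shift
```

In [8] the "frames" at each scale are themselves CS reconstructions, and the refined motion feeds back into the next reconstruction pass; this sketch only shows the coarse-to-fine refinement loop.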
[10], published in 2011, designs a cross-layer system for video transmission using compressed sensing. The system jointly controls the video encoding rate, the transmission rate, and the channel coding rate; it is useful for researchers who focus on the network design of a compressive sensing application.
[9] is not about CVS, but I think it is very important for understanding current video compression techniques. It introduces techniques such as H.26x and MPEG, and is a very good introduction and review.
One more thing worth mentioning: the distributed compressed video coding in [5] and [6] both rely on the notion that the sparsest representation of a block in a CS frame is a combination of neighboring blocks from the key frames.
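That notion amounts to using vectorized key-frame neighbor blocks as a dictionary and asking for a sparse combination that explains the CS measurements. A minimal numpy sketch, where the OMP solver, dictionary size, and Gaussian measurement matrix are my assumptions ([5] and [6] each have their own recovery machinery):

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Orthogonal Matching Pursuit: greedily select up to k columns of A."""
    r, idx, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        if np.linalg.norm(r) < tol:
            break
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

def recover_block(y, Phi, neighbor_blocks, k=2):
    """Recover a CS-frame block from measurements y = Phi @ block.ravel(),
    using vectorized key-frame neighbor blocks as the sparsifying dictionary."""
    D = np.stack([b.ravel() for b in neighbor_blocks], axis=1)
    a = omp(Phi @ D, y, k)                 # sparse combination of neighbors
    return (D @ a).reshape(neighbor_blocks[0].shape)
```

When the current block really is close to one or two key-frame neighbors, a few dictionary atoms suffice and far fewer measurements are needed than for an arbitrary block.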
[1] Compressive imaging for video representation and coding
[2] Compressive coded aperture video reconstruction
[3] Compressed video sensing
[4] Compressive Video Sampling
[5] Distributed video coding using compressive sampling
[6] Distributed compressed video sensing
[7] Distributed compressive video sensing
[8] A multiscale framework for compressive sensing of video
[9] Video Compression Techniques: An Overview
[10] Compressed-Sensing-Enabled Video Streaming for Wireless Multimedia Sensor Networks
[11] Lifting-based invertible motion adaptive transform framework for highly scalable video compression