Toward enhancing the distributed video coder under a multiview video codec framework
20 December 2016
Shih-Chieh Lee, Jiann-Jone Chen, Yao-Hong Tsai, Chin-Hua Chen
Abstract
The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We propose to exploit both inter- and intraview video correlations to enhance the side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) is proposed to yield a high-quality SI frame for better DVC reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, are exploited to design a priority rate control for the turbo code, such that DVC decoding can be carried out with the fewest parity bits. The proposed COMPETE method demonstrates lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE reduces the time complexity of MVME by a factor of 1.29 to 2.56, as compared to previous hybrid MVME methods, while the image peak signal-to-noise ratios (PSNRs) of the decoded video are improved by 0.2 to 3.5 dB, as compared to H.264/AVC intracoding.

1. Introduction

Multiview video codec (MVC) design has become popular,1 and wide-spread applications, such as three-dimensional (3-D) video, free-viewpoint television (FTV), and video surveillance networks, can be developed on top of it. The 3-D video provides high-quality and immersive multimedia entertainment that can be experienced through various channels, including movies, TV, the internet, and so on. The FTV is an MVC system that allows switching among different viewpoints, in which the video scene is captured by cameras from specific view angles. For video surveillance networks, the MVC can be used to monitor and detect unusual events/objects. However, the MVC requires intersensor communication, which is expensive and not feasible in some applications. The amount of information and the computational load required for an MVC codec are very large compared to those of a monoview codec, so efficiently processing and compressing multiview videos is challenging. The Joint Video Team has been working on the MVC, which captures videos from different cameras and encodes these signals with reference to each other to yield a single bitstream. To enhance codec performance, most MVC schemes exploit correlations between both intraview and interview frames. The encoder performs block motion compensation (MC) and disparity estimation to remove correlations between images along the intraview/temporal and interview dimensions to achieve high compression efficiency. Under this MVC framework, the time complexity of the encoding operations needed for efficient compression is high, so it cannot provide low-complexity encoding for applications like wireless video sensor/surveillance networks and low-power MVC capturing devices. The coding complexity has to be shifted to the decoder to make these applications feasible.

The distributed video coder (DVC)2,3 was proposed to effectively shift coding complexity to the decoder: signals captured by several low-power devices are encoded independently and decoded jointly. It can be extended to deal with multiview video signals,4,5 in which the disparity information among images of different views can be exploited for removing correlations, in addition to the correlations among intraview images. The DVC2 was developed based on lossless distributed source coding, also known as the Slepian–Wolf coder (SWC).6 An important aspect of the SWC is that separate encoding can theoretically achieve the same compression ratio as joint encoding, as long as the correlations among data streams are exploited by a joint decoder. This SWC framework was extended to lossy compression with side information (SI) at the decoder,7 as in the Wyner–Ziv (WZ) coder. With the WZ coding algorithm, the DVC treats video compression as a channel coding problem. The input video of the DVC is decomposed into odd and even sequences, in which the former is encoded as key frames (KFs) and the latter as WZ frames (WZFs). The KFs are encoded with the H.264/AVC8 intramode, H.264/INTRA, and the WZFs are block-transformed, quantized, and transmitted through error correction codes bit-plane by bit-plane, in which only part of the parity bits are transmitted. At the decoder, the KFs are utilized to yield the SI, a noisy estimate of the WZF, which acts as the systematic part of an error correction code and cooperates with the received parity bits to correct channel errors. Compared to conventional video codecs, the DVC effectively shifts a considerable amount of the coding complexity from the encoder to the decoder. It can also be applied to error resilience control,9 which treats the side information frame (SIF) as additional reference information to correct channel errors. Recently, a new distributed video codec based on modulo operation in the pixel domain has been proposed,10 which demonstrates lower decoding complexity.

Integrating the MVC with distributed video coding, denoted as multiview distributed video coding (MDVC), would allow several low-power capturing devices to encode independently while their signals are decoded jointly. A view-synthesis and disparity-based correlation model that exploits interview video correlation was proposed to deliver error-resilient video in a distributed multicamera system.11 One simple MDVC example with a left-, a right-, and a central-view camera is shown in Fig. 1. The left- (L) and right-view (R) videos are encoded and decoded by a traditional video codec, e.g., H.264/INTRA, to act as KFs (I frames) for DVC decoding. The central-view video is encoded as interleaved intra (I) frames and WZFs, i.e., with a group of pictures (GOP) size of 2, |GOP| = 2. At the decoder, the SI for a WZF can be estimated by exploiting the intraview and interview image correlations, respectively. The decoded KFs are utilized to jointly reconstruct the WZFs, $\hat{I}_{2t}^{WZ}$, based on inter- and intraview image correlations; these correlations are exploited by assigning weights to the different motion vectors (MVs) estimated under the MDVC framework. This decoder-driven fusion method is adopted to improve the codec performance, e.g., peak signal-to-noise ratios (PSNRs) and time complexity. In addition, the embedded DVC makes it feasible to set up low-complexity, mobile encoders for multiview video acquisition to enable low-delay and real-time processing of the MDVC. The decoder can absorb the shifted computational complexity by deploying a high-performance computer for central decoding, e.g., with large buffers, disk arrays, and high-speed CPUs.

Fig. 1

Multiview distributed video coding with |GOP|=2.


Research on improving the MDVC SIF quality is plentiful.12–14 An iterative SIF generation method uses the decoded WZF to refine the SIF,12 based on which a second iteration can enhance the quality of decoded images. By performing interpolation along the intra- and interview dimensions, respectively, to yield candidate SIFs, the final SIF can be fused from these candidates with a specific reliability measurement.13 The interview interpolated candidate SIF for fusion can be enhanced by using a perspectively transformed one,15,16 which helps to fuse a better final SIF and demonstrates better coding performance than monoview DVC. Three fusion techniques that exploit signal properties of neighboring residual frames along the intra- and interview directions were proposed to improve robustness and SIF quality.17 The fusion can also adopt a support vector machine to identify a set of features for classifying pixels into either the temporal or the disparity class, by which the fusion can yield a better SIF.18 It provides a good solution for fusing intra- and interview predictions. However, these fusion methods suffer from performance degradation due to low temporal prediction quality and irregular video motion. An adaptive filtering view interpolation method19,20 was proposed to minimize the difference between the SIF and the decoded KF, which can compensate for intercamera mismatches and improve SIF quality. When occlusion exists between interview videos, temporal frame interpolation is utilized to compensate for the deficiency of interview linear fusion20 and improve SIF quality. Various SI generation methods have been evaluated and compared with respect to their utilization efficiency.

By estimating motion on interpolated frames, irregular motion artifacts can be eliminated and the SIF quality can be improved.21 One MDVC codec22 was designed to transmit a small amount of error control information to replace an untransmitted frame; the information is obtained from a low-dimensional blockwise projection of the frame, i.e., a mean-based projection. The most prominent feature of this work is that it is performed as a postprocessing step after decoding and interpolating the received video, which allows easy integration with various video transmission systems.

A conventional video codec usually adopts a coding structure with a GOP size larger than 15, |GOP| > 15, to yield good enough rate-distortion (RD) performance. For the MDVC, the GOP size is usually set smaller because, when the WZ codec adopts longer GOP sizes, ME becomes difficult and less reliable, and the reconstructed SIF quality is degraded. Previous research23 investigated the rate-distortion and complexity performance of the feedback-channel-based WZ codec as a function of the GOP size and showed that the lowest encoder complexity, e.g., |GOP| = 2, yields the best RD performance as compared with the conventional video codec. For the MDVC, the coding structure with |GOP| = 2 is adopted for simplicity and efficiency. Under the MDVC framework, we propose to process static and nonstatic image regions with different procedures. By exploiting correlations between images along the inter- and intraview dimensions, the proposed weighted block-matching prediction (BMP) can yield higher SIF quality. This categorized block matching prediction with fidelity weights method is abbreviated as COMPETE. At the decoder, the scale-invariant feature transform (SIFT)24 is adopted to find stable key feature points in the first decoded KF images, $\hat{L}_0$, $\hat{R}_0$, and $\hat{I}_0$, which are used for matching correspondent features among interview video images to estimate the homography matrices, $H_l$ and $H_r$, through a RANSAC25 algorithm. The SIFT processing time is analyzed to be proportional to the image size. The $H_l$ and $H_r$ are estimated once at the decoder to perspectively transform the side-view images to the central view. The homography matrix can also be estimated at a regular time interval or dynamically according to scene foreground/background changes. In the proposed COMPETE algorithm, image blocks are categorized into motion, no-motion, and outlier blocks, which are processed in different ways. For motion blocks, with both the perspectively transformed images, $\hat{L}_t$ and $\hat{R}_t$, and the reconstructed central-view images, $\hat{I}_t$, available at the decoder, block MC can be performed between adjacent transformed and central-view images to yield MVs. Combining the blocks reached by these MVs with weights proportional to block fidelity generates smoother and higher-quality SIFs. For no-motion blocks, the current block is compensated by the co-located block in the previous frame. For blocks residing on the outlier region resulting from perspective transformation, temporal bidirectional MC is performed between the central-view images $\hat{I}_{2t-1}$ and $\hat{I}_{2t+1}$. The proposed COMPETE algorithm helps to improve the SI confidence and the quality of the decoded WZFs, $\hat{I}_{2t}^{WZ}$, for the MDVC system. The COMPETE also effectively decreases the computational load while achieving comparable PSNR performance with other SIF reconstruction methods, e.g., MVME26 and H.264/INTRA.

For rate control of the MDVC channel coding, the turbo codec is designed to let the decoder receive just enough parity bits from the encoder for signal reconstruction. The rate compatible punctured turbo (RCPT) code is adopted for the MDVC channel coding; it was originally developed for unequal error protection over unstable transmission channels.27 An automatic repeat request (ARQ) rate control method was developed under RCPT28 to transmit the fewest parity bits needed for successful decoding. For the turbo decoder to reference more reliable prior probabilities, reducing its iteration count and improving decoding efficiency, the correlation of DCT coefficients between the original frame and its SIF is modeled as a Laplacian distribution.29 Different puncture patterns are designed for the direct and alternating current coefficients, DCs and ACs, to yield the parity bits, based on which the correlation between bit-planes is exploited to estimate the posterior probability that serves as the prior probability for turbo decoding. Simulations verified that the turbo decoding time can be reduced to 37%, as compared to other SIF generation methods.

In what follows, the SIF reconstruction methods developed under the MDVC system and the proposed COMPETE method are described in Sec. 2. The proposed rate control algorithm to improve the MDVC performance is described in Sec. 3. Section 4 presents the simulation study, and Sec. 5 concludes this paper.

2. Multiview Distributed Video Coding Side Information

For an MDVC with |GOP| = 2, half of the central-view images are encoded as WZFs, and the SIF quality at the decoder dominates the WZ codec performance. The SIF at the decoder can be considered as a reconstruction of the original WZF transmitted through a noisy channel. If the SIF quality is high enough, fewer parity bits will be requested during decoding and higher codec efficiency can be achieved. In a monoview video codec, the general approach is to perform temporal interpolation/extrapolation from KFs to yield the SIF; other approaches adopt motion-compensated interpolation to improve SIF quality, such as an optical flow predictor30 and a hash-based estimator.31 For the MVC, the same scene is captured from different viewing angles by different cameras, so the correlation among different view videos can be utilized for SIF generation. Under the MDVC framework, we propose to utilize the SIFT24 feature extraction and the RANSAC25,32 algorithm to exploit feature correspondences among interview video images. The SIFT outperforms other feature descriptors on images with real geometric and photometric transformations,33 and the RANSAC helps to robustly fit a model to data in the presence of outliers, based on which the homography matrices34 can be estimated for the perspective transform from the side-view videos to the central view. The proposed BMP algorithm can then be carried out to yield a high-quality SIF and improve the quality of the decoded WZF. Different SIF reconstruction methods developed based on the MDVC framework, such as motion compensated temporal interpolation (MCTI),35 MVME,26 and hybrid-MVME (H-MVME), are first reviewed for performance comparison in the following sections.

2.1. Side Information Reconstruction

The MCTI35 is an image reconstruction/interpolation method in which block ME and MC are utilized to exploit the temporal correlation of monoview videos. To interpolate the current frame, $I_{2t}$, the MVs estimated between its previous frame $\hat{I}_{2t-1}$ and the next frame $\hat{I}_{2t+1}$ are halved for bidirectional MC to yield the interpolated SIF, $Y^{SI}$. The MVME scheme26 carried out at the decoder is shown in Fig. 2, in which the KFs, the I frames, are coded with H.264/INTRA and the WZF is to be reconstructed with its SIF. For one WZF, two ME paths can be adopted: the inner path performs disparity vector estimation followed by MV estimation, as demonstrated in Fig. 3(a); the outer path reverses these two vector estimation procedures, as shown in Fig. 3(b). To interpolate each block of N×N pixels in the WZF, let the side-view image at time 2t-1, $I_{side}(2t-1)$, be the target image, in which a best matched block, with a disparity vector $v_d$ corresponding to the co-located block in the central-view image $I_{central}(2t-1)$, is found. The best matched block in $I_{side}(2t-1)$ is then used to find another best matched block in $I_{side}(2t)$ with an MV $v_m$. This procedure yields one reference $v_m$, or one inner-path MV, for the co-located block in the current WZF. By applying the same procedure to the other three sets of reference images, three other inner-path MVs can be found for the current block in the WZF. The outer-path MVs are obtained by the same procedure but with MV estimation first and then disparity vector estimation.
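To make the MCTI operation concrete, the following is a minimal numpy sketch of the idea, assuming 8-bit grayscale frames; the 8×8 block size, ±8 full-search range, and SAD criterion are illustrative choices rather than the exact configuration used in this work.

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def block_me(ref, tgt, bx, by, bw=8, sr=8):
    # Full search: match the bw x bw block at (by, bx) of `tgt` inside `ref`
    # within a +/- sr window; returns the best displacement (dy, dx).
    h, w = tgt.shape
    cur = tgt[by:by + bw, bx:bx + bw]
    best, best_mv = None, (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bw > h or x + bw > w:
                continue
            cost = sad(ref[y:y + bw, x:x + bw], cur)
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv

def mcti(prev, nxt, bw=8, sr=8):
    # Interpolate the missing middle frame: estimate motion between
    # prev (I_{2t-1}) and nxt (I_{2t+1}), halve the vector, and average
    # the two motion-compensated blocks (bidirectional MC).
    h, w = prev.shape
    out = np.zeros_like(prev)
    for by in range(0, h - bw + 1, bw):
        for bx in range(0, w - bw + 1, bw):
            dy, dx = block_me(nxt, prev, bx, by, bw, sr)
            y0 = min(max(by - dy // 2, 0), h - bw)   # block origin in prev
            x0 = min(max(bx - dx // 2, 0), w - bw)
            y1 = min(max(by + dy // 2, 0), h - bw)   # block origin in nxt
            x1 = min(max(bx + dx // 2, 0), w - bw)
            b_prev = prev[y0:y0 + bw, x0:x0 + bw].astype(np.int32)
            b_next = nxt[y1:y1 + bw, x1:x1 + bw].astype(np.int32)
            out[by:by + bw, bx:bx + bw] = ((b_prev + b_next) // 2).astype(prev.dtype)
    return out
```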

Fig. 2

The general MVME framework.26


Fig. 3

Practical implementations of MVME.26 (a) Inner path MVME and (b) outer path MVME.


When all ME paths of the WZF are included, i.e., four inner and four outer paths, the MVME yields eight estimated frames. The SIF can be reconstructed by taking the weighted or nonweighted average of the corresponding blocks reached by the estimated MVs. Although the MVME provides several estimated MVs for reference, it suffers from heavy computation. In addition, it may lead to trivial estimation errors for no-motion blocks. The MVME approach applies general ME operations, designed for intraview video, to estimate disparity vectors among interview images. To bridge this inherent gap between $v_m$ and $v_d$ estimation, we propose to estimate the homography matrix to perspectively transform the side-view video to the central view, so that applying ME across interview images becomes appropriate. This H-MVME approach can yield better PSNR performance than MVME. In addition to handling the MVME in the hybrid approach, we propose to eliminate trivial ME operations for no-motion blocks and to perform BMP by calculating the weighted sum of MC blocks reached through different MVs, denoted as COMPETE as described above, to improve the MVME and yield a high-quality SIF. In case the disparity/MV estimation operates on the outlier, i.e., regions without correspondent pixels after perspective transformation, the temporal MCTI is adopted to interpolate the current block in the WZF.

2.2. COMPETE Side Information Reconstruction

The COMPETE SIF reconstruction method is proposed to enhance the H-MVME and yield an SIF with higher confidence. When homography matrices are not available for the perspective transformation, we utilize the SIFT feature extraction and the RANSAC procedure to estimate them, and then apply BMP to yield a high-confidence SIF.

2.2.1. Homography

The homography relates the pixel coordinates of two images; when it is applied to every pixel, the new image is a warped version of the original one, and this relationship is independent of the scene structure. To be more specific, one 3×3 homography matrix, H, can transform one camera view to another.34 To estimate the $H_v$, $v \in \{l, r\}$, the SIFT24 algorithm is first applied to the video images of the different views, L, R, and I, to find stable key feature points. Tentative feature point pairs between two images are selected to provide candidate homography matrices $H_v$. The feature point pairs and candidate matrices are iteratively selected and validated by finding the maximum consensus set through the RANSAC procedure to yield the best $H_v$. At this stage, the goal is to find all correspondent SIFT points, or matching pairs, between the two different view images. Mismatches will occur because the matching process assumes proximity and similarity, and some correspondences are located in outlier regions. In general, RANSAC outperforms gradient descent methods36 because too many outliers prevent the latter from converging to the global optimum.

2.2.2. Scale-invariant feature transform

The SIFT24 procedure helps to represent one image with robust feature points. It transforms the image into scale-invariant feature coordinates corresponding to local features. The procedure discards low-contrast feature points and eliminates edge responses, keeping only the remaining stable keypoints.

2.2.3. Interpolation and homography

The SIF at the turbo decoder is generated by the "interpolation/homography" module, as shown in Fig. 4. We propose to exploit correlations among interview images, in addition to intraview ones, while preventing the reference SIFs from suffering severe disparity. The reference central-view images are obtained through the homography matrices, $H_l$ and $H_r$, from the left- and right-view images. To estimate $H_l$ and $H_r$, the first intracoded frames, $\hat{L}_0$, $\hat{R}_0$, and $\hat{I}_0$, received and reconstructed at the decoder, are used as sample images to extract correspondent stable SIFT features between the left/right-view and central-view images. Based on these correspondent feature points, the RANSAC procedure is carried out to find the matrices $H_l$ and $H_r$ that yield the maximum number of inliers. The reference central-view images can then be obtained by warping the decoded left- and right-view images, $\hat{L}$ and $\hat{R}$, with $H_l$ and $H_r$, respectively, as shown in Fig. 5(a). With the reference central-view images, the BMP procedure can be carried out to yield the SIF, $\hat{I}_{2t}^{int}$. For one multiview video, the homography matrix that transforms a side-view video to the central view has to be estimated only once, with reference to $\{\hat{R}_0, \hat{I}_0, \hat{L}_0\}$, at the beginning of decoding. With the homography matrices estimated through the SIFT and RANSAC procedures, the BMP among the transformed L and R images and the decoded $\hat{I}_{2t-1}$ is performed to yield the SIF, as described in the following section.
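As an illustration of this one-time homography estimation, the following is a small sketch using OpenCV's SIFT, brute-force matching, and RANSAC, assuming the first decoded KFs are available as grayscale numpy arrays; the ratio-test and reprojection thresholds are illustrative values, not the authors' settings.

```python
import cv2
import numpy as np

def estimate_homography(side_kf, central_kf, ratio=0.75, ransac_thresh=3.0):
    # Extract SIFT keypoints/descriptors from the first decoded key frames
    # of a side view and the central view.
    sift = cv2.SIFT_create()
    kp_s, des_s = sift.detectAndCompute(side_kf, None)
    kp_c, des_c = sift.detectAndCompute(central_kf, None)

    # Tentative matches with Lowe's ratio test to discard ambiguous pairs.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_s, des_c, k=2)
            if m.distance < ratio * n.distance]

    # RANSAC keeps the homography with the maximum consensus (inlier) set.
    src = np.float32([kp_s[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_c[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    return H

def warp_to_central(side_frame, H, central_shape):
    # Perspectively transform a decoded side-view frame to the central view;
    # pixels with no source (the "outlier" region) are left as zeros.
    h, w = central_shape[:2]
    return cv2.warpPerspective(side_frame, H, (w, h))
```

In this setup, $H_l$ would be estimated once from $(\hat{L}_0, \hat{I}_0)$ and then reused to warp every subsequent decoded left-view frame, matching the one-time estimation described above.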

Fig. 4

The block diagram of interpolation/homography.


Fig. 5

The ME implementation of an MVC: (a) the perspective transform of the left and right views and (b) the block matching search between the central view and the perspective-transformed images: X denotes the co-located block of $B_i$ in $\hat{I}_{2t+1}$, and Y is the best matched block of $B_i$.


2.3. Block Matching Prediction

Performing the perspective transformation from the side-view to the central-view frames results in an outlier, i.e., a missed-transform area, as shown in Fig. 5(a). The perspectively transformed images, $\hat{L}$ and $\hat{R}$, and the reconstructed central-view images, $\hat{I}_{2t\pm1}$, are used to perform block matching to estimate the disparity and motion vectors, denoted as $v_d$ and $v_m$, respectively. The SIF of a central-view image that is not transmitted can be reconstructed through weighted motion-compensated prediction by the above $v_d$s and $v_m$s, in which the latter are estimated from $\hat{I}_{2t\pm1}$. This BMP process reconstructs the SIF, $\hat{I}_{2t}^{int}$, as shown in Fig. 5(b), where $B_i$ is a block in $\hat{I}_{2t-1}$, and $v_d^i$ and $v_m^i$ are the disparity and motion vectors estimated between reconstructed interview images, e.g., $\{\hat{L}_{2t-1}, \hat{I}_{2t-1}\}$ and $\{\hat{R}_{2t-1}, \hat{I}_{2t-1}\}$, and between $\hat{I}_{2t\pm1}$, respectively. The COMPETE flowchart is shown in Fig. 6. One $\hat{I}_{2t-1}$ is partitioned into M 8×8 blocks, $\{B_i(\hat{I}_{2t-1}) \mid i = 1, \ldots, M\}$, and a large block $LB_i(\hat{I}_{2t-1})$ consists of 2×2 blocks, i.e., $LB_i(\hat{I}_{2t-1}) = \{B_i^{11}, B_i^{12}, B_i^{21}, B_i^{22}\}$, in which $B_i^{11}$ is the current block, i.e., $B_i = B_i^{11}$. The four block MVs in $LB_i$, $(v_m^{11}, v_m^{12}, v_m^{21}, v_m^{22})$, are obtained by performing motion estimation (ME) between $\hat{I}_{2t-1}$ and $\hat{I}_{2t+1}$ for the co-located $LB_i$. If $(v_m^{11}, v_m^{12}, v_m^{21}, v_m^{22}) = 0$, then $B_i$ in $LB_i$ is a no-motion block and can be reconstructed by direct copy from the previous image, i.e., $B_i^{11}(\hat{I}_{2t}^{int}) = B_i(\hat{I}_{2t-1})$. If $(v_m^{11}, v_m^{12}, v_m^{21}, v_m^{22}) \neq 0$, then $B_i$ is a motion block, and the corresponding disparity blocks in the side-view transformed images, $\hat{L}$ and $\hat{R}$, and the blocks reached by $B_i$'s MVs are combined with weights proportional to block fidelity to yield a more accurate compensated block for $B_i$ in $\hat{I}_{2t}^{int}$. We take the ME process for a $B_i$ referencing the left- and central-view images as an example; the right-view one is carried out in the same way. The first-phase block disparity estimation is performed between $\hat{I}_{2t-1}$ and $\hat{L}_{2t-1}$, denoted as $BM_{2\times2}(B_i): B_i(\hat{I}_{2t-1}) \rightarrow B_i(\hat{L}_{2t-1})$, which yields the best matched block in $\hat{L}_{2t-1}$ with a disparity vector $v_d$. If the best matched block does not reside on the outlier of $\hat{L}_{2t-1}$, the second-phase block ME is performed, in which the search range in $\hat{L}_{2t}$ is two blocks wide along the vertical and horizontal directions and centered at the co-located coordinate of $B_i$ on $\hat{L}_{2t-1}$ with the offset $v_d$. It yields one MV, $v_m^1$, and a second MV, $v_m^2$, is obtained by the same procedure $BM_{2\times2}(B_i): B_i(\hat{I}_{2t+1}) \rightarrow B_i(\hat{L}_{2t+1})$. The other two MVs, $v_m^3$ and $v_m^4$, are estimated from the right-view video through the same procedure. When performing MC for an $\hat{I}_{2t}$, if any image block reached through an inner-path MV, $v_m^j$, resides on the outlier, its weight $w_j$ is set to zero. Let $B_i(I, v)$ denote the image block obtained from the co-located block on an image I displaced by the vector v; the $B_i$ reconstruction for the SIF, $\hat{I}_{2t}^{int}$, can then be represented as

Eq. (1)

$$B_i(\hat{I}_{2t}^{int}) = \sum_{j=1,3} w_j \cdot B_i(\hat{I}_{2t-1}, v_m^j) + \sum_{j=2,4} w_j \cdot B_i(\hat{I}_{2t+1}, v_m^j),$$
where the first term yields the weighted central-view prediction from the MVs referencing $\hat{I}_{2t-1}$ and the second from those referencing $\hat{I}_{2t+1}$. In general, $w_j$ should be proportional to the normalized fidelity of the corresponding best matched block with respect to the co-located blocks in the central view. The $w_j$ for one MC block reached through $v_m^j$ can be computed as

Eq. (2)

$$w_j = \frac{1/\mathrm{SAD}_j}{\sum_{j'=1}^{4} 1/\mathrm{SAD}_{j'}}, \quad j \leq 4,$$
in which $\mathrm{SAD}_j$ denotes the sum of absolute differences, whose reciprocal is used as the block fidelity. On the other hand, if all matched blocks reside on the outlier, no prediction result can satisfy the assumed scenario. Under this condition, only the reference MV, $v_m$, estimated between the two reconstructed central-view images, $\hat{I}_{2t-1}$ and $\hat{I}_{2t+1}$, can be used to predict the SIF. Bidirectional MC is then used to reconstruct the block of the SIF:

Eq. (3)

$$B_i(\hat{I}_{2t}^{int}) = \frac{1}{2}\left[B_i\left(\hat{I}_{2t-1}, \frac{v_m^{11}}{2}\right) + B_i\left(\hat{I}_{2t+1}, \frac{v_m^{11}}{2}\right)\right].$$
To further refine the weight $w_j$ for an MC block $B_i(I, v)$, the linear minimum mean squared error (LMMSE) estimator can be adopted; the computation of the LMMSE weights is described in the Appendix. Experiments showed that adopting LMMSE weights can improve the SIF PSNR by up to 0.1 dB for low-complexity videos and by 0.3 to 0.4 dB for medium-to-high-complexity videos, compared to the weights proportional to block fidelity given by Eq. (2).
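The block-level decision logic of Eqs. (1)-(3) can be summarized by the following sketch, assuming the four candidate MC blocks and their SADs have already been obtained (e.g., with a block search such as the one in the MCTI sketch above); representing the outlier test by a None SAD and clipping to 8-bit values are illustrative simplifications.

```python
import numpy as np

def fidelity_weights(sads):
    # Eq. (2): weights proportional to the reciprocal SAD of each
    # candidate block; candidates flagged as outliers (None) get weight zero.
    inv = np.array([0.0 if s is None else 1.0 / max(s, 1) for s in sads])
    return inv / inv.sum() if inv.sum() > 0 else inv

def compete_block(candidates, sads, prev_block, next_block, mv_zero):
    # candidates: the four MC blocks reached through the inner-path MVs
    #             (two referencing I_{2t-1}, two referencing I_{2t+1}).
    # sads:       their matching SADs, or None when the match fell on the
    #             outlier region of a transformed side view.
    # mv_zero:    True when the 2x2 large-block ME returned all-zero MVs.
    if mv_zero:
        # No-motion block: direct copy of the co-located previous block.
        return prev_block
    w = fidelity_weights(sads)
    if w.sum() == 0:
        # Eq. (3): every candidate fell on the outlier; fall back to
        # bidirectional temporal MC between the two central-view KF blocks
        # (here assumed to be already compensated with the halved MV).
        return ((prev_block.astype(np.int32) +
                 next_block.astype(np.int32)) // 2).astype(prev_block.dtype)
    # Eq. (1): fidelity-weighted combination of the candidate blocks.
    acc = sum(wj * c.astype(np.float64) for wj, c in zip(w, candidates))
    return np.clip(acc, 0, 255).astype(prev_block.dtype)
```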

Fig. 6

The flow chart of COMPETE.


In our experiments, the COMPETE operates under the frame ratio KF:WZF = 5:1, while the fusion-based homography method uses KF:WZF = 1:1. The COMPETE can also be adapted to operate under the ratio KF:WZF = 1:1. The COMPETE needs to transmit the first KF of each view to estimate the homography matrices, as shown in Fig. 7(a), and there are one MV and two disparity vectors that can be used to interpolate the SI of $\hat{I}_{2t}^{int}$. To interpolate the SI of a side-view image, say $\hat{L}_{2t+1}^{int}$, only one MV and one disparity vector can be referenced, as shown in Fig. 7(b). For the last central-view image, only two disparity vectors can be referenced to interpolate its SI, as shown in Fig. 7(c). When the WZF/KF ratio is larger than 1, learning-based approaches37 that apply an expectation maximization algorithm for unsupervised learning of MVs are required.

Fig. 7

COMPETE performed on different GOP structures with KF:WZF = 1:1. (a) One $v_m$ and two $v_d$s for $\hat{I}_{2t}^{int}$; (b) one $v_m$ and one $v_d$ for $\hat{L}_{2t+1}^{int}$; and (c) two $v_d$s for $\hat{I}_{2t+2}^{int}$.


3. Multiview Distributed Video Coding Rate Control Algorithm

The internal signal processing flow of the MDVC (Fig. 1) is shown in Fig. 8. The encoder E comprises both H.264 and WZ encoders, in which the left- and right-view images, $\{L_t\}$ and $\{R_t\}$, are encoded by the former to yield the KF bitstreams $s_l$ and $s_r$, respectively. The central-view images, $\{I_t\}$, are separated into odd and even image sequences, $\{I_{2t-1}\}$ and $\{I_{2t}\}$. The odd images are encoded by H.264 intra to provide the KF bitstream $s_o$, and the even ones by the WZ encoder, with an appended cyclic redundancy check (CRC) checksum, to yield the parity bits $\tilde{p}_{2t}$. For adaptive rate control, the RCPT28 code is adopted for channel coding, because it performs near the Shannon limit at low SNR while providing excellent throughput at high SNR.28 The WZ encoder determines whether to send more parity bits based on the NAK feedback requests from the WZ decoder. The decoder D comprises one H.264 decoder, one WZ decoder, and one interpolation/homography function module. The received bitstreams, $s_l$, $s_r$, and $s_o$, are decoded by the H.264 decoder to yield the reconstructed left-, right-, and central-view odd images, $\hat{L}_t$, $\hat{R}_t$, and $\hat{I}_{2t-1}$, respectively. They are the inputs of the interpolation/homography module that reconstructs the SI, an interpolated central-view image $\hat{I}_{2t}^{int}$, for the WZ decoder to reconstruct $\hat{I}_{2t}$ with reference to $\hat{I}_{2t}^{int}$. The multiplexer combines the reconstructed $\hat{I}_{2t-1}$ and $\hat{I}_{2t}$ to yield the final central-view video $\{\hat{I}_t\}$.
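The frame routing on the encoder side can be pictured with the following schematic sketch (not a bit-exact implementation); intra_encode and wz_encode stand for an H.264/INTRA coder and the CRC/turbo WZ branch and are placeholder names introduced here for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EncodedStreams:
    s_l: List[bytes]      # left-view KF bitstream
    s_r: List[bytes]      # right-view KF bitstream
    s_o: List[bytes]      # central-view odd-frame KF bitstream
    parity: List[bytes]   # WZ parity bits for even central frames

def mdvc_encode(left, right, central, intra_encode, wz_encode):
    # Route frames as in Fig. 8: side views and odd central frames go to the
    # intra coder; even central frames go to the WZ (CRC + turbo) branch.
    out = EncodedStreams([], [], [], [])
    for t, (l, r, c) in enumerate(zip(left, right, central)):
        out.s_l.append(intra_encode(l))
        out.s_r.append(intra_encode(r))
        if t % 2 == 0:                 # 1-based odd frames are key frames
            out.s_o.append(intra_encode(c))
        else:                          # 1-based even frames are WZ frames
            out.parity.append(wz_encode(c))
    return out
```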

Fig. 8

The MDVC codec framework.


3.1. Wyner–Ziv Coding

The WZ encoder in the MDVC system is shown in Fig. 9. The input image, $I_{2t}$, is divided into 4×4 blocks, which are transformed into frequency-domain coefficients, $c_{2t}$, through T and quantized through Q to yield the quantized coefficients $q_{2t}$. To reduce the encoding complexity, the integer DCT is adopted, which also allows a low-complexity hardware implementation. In $c_{2t}$, the DC coefficient contains most of the block signal energy and is allocated more bits than the higher-frequency coefficients, the ACs. The coefficients in a 4×4 block of $c_{2t}$ are partitioned into bands, and each coefficient band is uniformly quantized with a $2^{b_k}$-level quantizer (Q), where $b_k$ denotes the number of bits assigned to the k'th coefficient. The number of quantization levels, $2^{b_k}$, for a 4×4 DCT coefficient block38 is determined through an optimal bit allocation procedure on the $c_{2t}$ coefficients.

Fig. 9

The WZ encoder.38


In the practical implementation, the quantization step size of the i'th coefficient, $\Delta_i$, is set up with a loading factor of 4 for a given coefficient probability density function (PDF),39 i.e.,

Eq. (4)

$$\Delta_i = \frac{4\sigma_i}{2^{b_k}}, \quad \text{for } b_k \neq 0.$$
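A small sketch of this uniform band quantizer is given below, assuming $\sigma_i$ is the measured standard deviation of the corresponding coefficient band and that the loading factor of 4 means a total covered range of $4\sigma_i$; the mid-rise index layout and the clipping of overload values are illustrative choices.

```python
import numpy as np

def band_quantize(coeffs, sigma, bits):
    # Eq. (4): step size = 4*sigma / 2^bits for a band quantized with
    # 2^bits levels; returns integer quantization indexes.
    if bits == 0:
        return np.zeros_like(coeffs, dtype=np.int32)
    step = 4.0 * sigma / (2 ** bits)
    # Clip to the +/- 2*sigma overload region implied by the loading factor,
    # then map to indexes 0 .. 2^bits - 1.
    half_range = 2.0 * sigma
    clipped = np.clip(coeffs, -half_range, half_range - 1e-9)
    return np.floor((clipped + half_range) / step).astype(np.int32)
```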

After quantization, each coefficient is represented by its quantization index in $q_{2t}$. For a simple demonstration, the parity bit generation process for one 16×16 image is described. The 16×16 image is decomposed into sixteen 4×4 blocks on which the DCT is performed, and the numbers of bits representing the quantized indexes of the DCs and ACs are 4 and 3, respectively. The DCs and ACs of these sixteen 4×4 DCT blocks are rearranged such that coefficients of the same frequency are grouped together and queued in zigzag scan order, i.e., $\{DC^i\}_{i=1,\ldots,n}, \{AC_1^i\}_{i=1,\ldots,n}, \{AC_2^i\}_{i=1,\ldots,n}, \ldots, \{AC_a^i\}_{i=1,\ldots,n}$, where n is the total number of blocks in the image and a is the number of ACs for a certain quantization pattern, as shown in the upper part of Fig. 10(a). For turbo encoding, the regrouped 4×4 arrangement of DCs is subjected to bit-plane extraction, as shown in Fig. 10(a), such that bits of the same significance are grouped together and transmitted in bit-plane order, i.e., $MSB_k = \{MSB_k^i \mid i = 1, 2, \ldots, 16\}$ for $k = 1, 2, \ldots, K$, where i is the index of the original 4×4 block and k is the bit-plane index. For the regrouped ACs, the transmission order is reversed, i.e., from the LSB to the MSB. The bitstream of these reordered bits, $b_{2t}$, is then used as the input to the CRC encoder, which appends the checksum of $b_{2t}$ and passes it to the turbo encoder. After interleaving, the turbo encoder yields the parity bitstreams $\tilde{p}_{2t} = \{\tilde{P}^1, \tilde{P}^2\}$, where $\tilde{P}^1 = \{\tilde{p}_1^1, \tilde{p}_2^1, \ldots, \tilde{p}_{16}^1, \ldots\}$ and $\tilde{P}^2 = \{\tilde{p}_1^2, \tilde{p}_2^2, \ldots, \tilde{p}_{16}^2, \ldots\}$. Both parity bitstreams are punctured with specific patterns of period ψ = 16 to form sub-blocks queued in the transmission buffer, denoted as $\tilde{P}_{2t}^1$ and $\tilde{P}_{2t}^2$, which are sent to the decoder upon request. The puncture pattern is designed to select parity bits according to the specified priority, as shown in Fig. 10(b). For turbo decoding, the systematic bits skipped at E are replaced with the reconstructed SI at D, which can be generated by different methods. The turbo decoder requests more parity bits in case it cannot correctly recover the data. In general, when the SI confidence is high, it requests fewer parity bits and improves the WZF quality. Detailed rate control steps are described in Sec. 3.2.
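A sketch of this reordering is shown below, assuming 4-bit DC and 3-bit AC indexes as in the 16×16 example; it only illustrates the bit-plane grouping and the MSB-first (DC) versus LSB-first (AC) queuing order, not the CRC or turbo stages.

```python
import numpy as np

def bitplane_order(indexes, bits, msb_first=True):
    # Rearrange a group of quantization indexes (e.g., the 16 DCs of a
    # 16x16 image) into bit-planes: all bits of equal significance are
    # grouped and queued together, MSB-first for DCs, LSB-first for ACs.
    planes = []
    order = range(bits - 1, -1, -1) if msb_first else range(bits)
    for k in order:
        planes.append([(int(v) >> k) & 1 for v in indexes])
    return planes  # planes[0] is the first transmitted bit-plane

# Example: 16 DC indexes quantized with 4 bits, 16 AC_1 indexes with 3 bits.
dc = np.random.randint(0, 16, size=16)
ac1 = np.random.randint(0, 8, size=16)
b_dc = bitplane_order(dc, bits=4, msb_first=True)    # MSB plane queued first
b_ac = bitplane_order(ac1, bits=3, msb_first=False)  # LSB plane queued first
```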

Fig. 10

The parity bit generation and transmission order in the puncture patterns of DCs and ACs with period ψ = 16. (a) The parity bit generation process, (b) the sub-block queuing pattern of the DCs, and (c) the sub-block queuing pattern of the ACs.


To reconstruct the WZF, $\hat{I}_{2t}$, from the received parity bit sub-blocks, $\{\tilde{P}_{2t}^1, \tilde{P}_{2t}^2\}$, the WZ decoder shown in Fig. 11 needs the SI, $\hat{I}_{2t}^{int}$, generated by the interpolation/homography module shown in Fig. 8. Before turbo decoding, the same T and Q processes are applied to $\hat{I}_{2t}^{int}$ to yield $\hat{c}_{2t}^{int}$ and $\hat{q}_{2t}^{int}$, respectively. To increase the SI confidence for turbo decoding, the distribution of the error between the reconstructed SIF and the original WZF is modeled as Laplacian. A transform-domain correlation noise model parameter updating procedure29 is applied to fit the coefficient error distribution of each 4×4 block with the Laplacian model. Since the original image encoded as a WZF is not available at the decoder, the MCTI image, $\hat{I}_{2t}^{int}$, interpolated from $\hat{I}_{2t\pm1}$, is used instead. After being processed by T and Q, the indexed signals, $\hat{q}_{2t}^{int}$, are reordered, grouped, and extracted by bit-plane to provide the systematic bits, $\hat{b}_{2t}^{int}$, for the turbo decoder. The turbo decoder performs the logarithmic maximum a posteriori algorithm, Log-MAP, with the help of the received parity bit sub-blocks, $\{\tilde{P}_{2t}^1, \tilde{P}_{2t}^2\}$, and CRC checksum verification, under a certain confidence measurement,40 to determine whether the decoding process has converged or more bits should be requested for the next iteration. After $\hat{b}_{2t}$ is decoded correctly, it is reversely processed by the bit-plane combining module to yield the quantized indexes, $\hat{q}_{2t}$, which are used as the input of the reconstruction module to refine $\hat{c}_{2t}^{int}$ into $\hat{c}_{2t}$.

Fig. 11

The WZ decoder.


The optimal reconstruction function that exploits the correlation between the original WZF and the SI14 is adopted, in which the distribution of the residual between the original WZF and the reconstructed SIF is assumed to be Laplacian, and the reconstructed sample is chosen to minimize the mean squared error (MMSE). The optimal reconstruction value, $\hat{c}_{2t}$, is the expectation $\hat{c}_{2t} = E[c_{2t} \mid c_{2t} \in [\Delta_i^l, \Delta_i^r], \hat{c}_{2t}^{int}]$, where $\Delta_i^l$ and $\Delta_i^r$ denote the lower and upper boundaries of the quantization interval $\Delta_i$ associated with the decoded index, and the expected value yields the MMSE estimate of the source coefficient. This procedure prevents the reconstructed values from deviating too far from the original values when the SI confidence is low. At the last stage, $\hat{c}_{2t}$ is inversely transformed to yield the final reconstructed image, $\hat{I}_{2t}$.
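The reconstruction step can be sketched as follows, assuming the Laplacian parameter α of the correlation noise model is known; the conditional expectation over the decoded quantization interval is evaluated here by direct numerical integration rather than by a closed form.

```python
import numpy as np

def laplacian_mmse_reconstruction(y, lo, hi, alpha, samples=2048):
    # E[c | c in [lo, hi], side information y] for a Laplacian residual
    # model p(c | y) ~ (alpha/2) * exp(-alpha * |c - y|), evaluated by
    # numerical integration over the decoded quantization interval.
    c = np.linspace(lo, hi, samples)
    p = 0.5 * alpha * np.exp(-alpha * np.abs(c - y))
    return float(np.trapz(c * p, c) / np.trapz(p, c))

# If the SI coefficient lies inside the decoded bin, the estimate stays
# close to it; otherwise it is pulled toward the nearest bin boundary.
print(laplacian_mmse_reconstruction(y=3.0, lo=0.0, hi=8.0, alpha=0.5))
print(laplacian_mmse_reconstruction(y=20.0, lo=0.0, hi=8.0, alpha=0.5))
```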

3.2. Rate Control Mechanism

To improve the decoding efficiency, we propose to impose specific puncture patterns and transmission orders according to the signal distribution properties of the DCs and ACs, respectively. In the COMPETE framework, all DCs/ACs of the same order are collected together and then zigzag scanned for turbo encoding. For block DCT-based video coding, the DC coefficient usually contains most of the block signal energy, and its MSBs contribute much more signal energy than its LSBs, so the former are assigned higher priority than the latter. As shown in Fig. 10(b), the system is designed to transmit the first MSBs of all DCs and then the second MSBs. The magnitudes of the ACs are much smaller and concentrated around zero. Since the ACs may be positive or negative, taking their absolute values leads to an even more skewed magnitude probability distribution. The sign bit of a quantized AC can be replaced by that of the quantized SIF at the decoder, under which the probability of the LSBs being 0 would be larger than that of the MSBs when represented with a fixed number of bits. As opposed to the DCs, the ACs are transmitted LSB first and then the second LSB,41 to speed up turbo decoding, as shown in Fig. 10(c). This transmission strategy for DCs and ACs helps to correct the decoding errors of the systematic bits with the fewest requested parity bits. Experiments showed that this rate control strategy yields 55% to 59% fewer requested bits for the turbo decoder.

The proposed rate control algorithm, developed based on the RCPT puncturing mechanism,28 is demonstrated in Fig. 12.

Fig. 12

The proposed RCPT-based rate control algorithm.


In the COMPETE system, the RCPT code is designed with rate 1/3 and puncturing period ψ = 16, formed from two rate-1/2 recursive systematic convolutional constituent codes with generator $(1 + D + D^2 + D^4)/(1 + D^2 + D^3)$. The puncturing table with different rates, $\{16/(16+V) \mid V = 0, 1, \ldots, 32\}$, is generated, in which V = 0 is not used because the systematic bits are discarded under the DVC framework. Figure 10 demonstrates part of the corresponding puncture table. When the first sub-block of parity bits is received, decoding is carried out based on the CRC alone.28 When the second sub-block is received, the first constituent code is decoded, and iterative turbo decoding starts after the third sub-block is received, for which a maximum iteration number, $T_{iter}$, is set. When the decoded results converge, i.e., the CRC check yields an all-zero syndrome, or when the number of iterations exceeds $T_{iter}$, the resulting bitstream is subjected to a second confirmation procedure. A larger $T_{iter}$ leads to heavier computation, so the tradeoff between $T_{iter}$ and the computation load has to be managed carefully. The value of $T_{iter}$ is determined from experiments on test videos of different complexities under different bit rates that can yield convergence. The confidence measurement uses the criterion $Conf_{Pr} \leq 10^{-3}$,40 in which

Eq. (5)

$$Conf_{Pr} = \frac{U}{L_B},$$
where $L_B$ is the predefined block length for decoding and U is the number of uncertain bits, i.e., bits whose decoded likelihood ratio, $\Pr(X_i = B \mid Y) / \Pr(X_i = 1 - B \mid Y)$, is not higher than 0.99. The decoding is successfully completed when both the CRC check and the confidence measure, $Conf_{Pr} \leq 10^{-3}$, are satisfied. When the CRC passes but the confidence measure fails, i.e., $Conf_{Pr} > 10^{-3}$, more sub-block parity bits are requested through the ARQ mechanism for the next iteration until all sub-block bits have been sent or the bitstream is decoded successfully.
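A sketch of this confidence measure is given below, assuming the Log-MAP decoder exposes per-bit log-likelihood ratios; the LLR-to-probability conversion is standard, and the 0.99 ratio threshold follows the text.

```python
import numpy as np

def conf_pr(llrs, decoded_bits, threshold=0.99):
    # Eq. (5): fraction of "uncertain" bits in a decoded block. A bit is
    # uncertain when the probability ratio in favor of its decoded value
    # does not exceed `threshold`; llrs are log(Pr(x=1|Y)/Pr(x=0|Y)).
    p1 = 1.0 / (1.0 + np.exp(-np.asarray(llrs, dtype=np.float64)))
    p_decoded = np.where(np.asarray(decoded_bits) == 1, p1, 1.0 - p1)
    ratio = p_decoded / np.maximum(1.0 - p_decoded, 1e-12)
    uncertain = np.count_nonzero(ratio <= threshold)
    return uncertain / len(llrs)

# Decoding is accepted when the CRC passes and conf_pr(...) <= 1e-3.
```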

To improve the turbo decoding performance while requesting fewer parity bits, the correlation among coefficient bit-planes is exploited to estimate the posterior probability, which is used as the prior probability for turbo decoding. The probability distribution of the difference between a SIF and the original image coded as a WZF is assumed to be Laplacian, i.e.,

Eq. (6)

$$p_{\hat{q}_{2t}^{int}}(n) = \frac{\alpha}{2} e^{-\alpha |\hat{q}_{2t}^{int} - n|}, \quad \alpha = \sqrt{\frac{2}{\sigma_x^2}},$$
where $\sigma_x^2$ is the variance of the residual signal between a WZF and its SIF.29 The b'th decoded bit of the DC band (DC1) is represented as

Eq. (7)

$$\hat{b}_b = \arg\max_{i \in \{0,1\}} \Pr\nolimits_{DC_1}(i \mid \hat{q}_{2t}^{int}, \hat{b}_{b-1}, \ldots, \hat{b}_2, \hat{b}_1),$$
where $\Pr_{DC_1}(i \mid \hat{q}_{2t}^{int}, \hat{b}_{b-1}, \ldots, \hat{b}_2, \hat{b}_1)$ is the posterior probability of $\hat{b}_b = i$ for DC1. When decoding $\hat{b}_3$ of DC1, both $\hat{b}_2$ and $\hat{b}_1$ and the reconstructed SI, $\hat{q}_{2t}^{int}$, are jointly referenced to specify the probability. Figure 13 shows an example of estimating the probability $\Pr_{DC_1}(\hat{b}_3 = 1 \mid \hat{q}_{2t}^{int}, \hat{b}_2, \hat{b}_1)$40 for a quantized DC represented with four bits, $b_1 b_2 b_3 b_4$, from MSB to LSB. The probability integrated over the shaded interval gives $\Pr_{DC_1}(\hat{b}_3 = 1 \mid \cdot)$, and $\Pr_{DC_1}(\hat{b}_3 = 0 \mid \cdot)$ can be calculated in a similar way. The turbo decoder updates the prior log-likelihood ratio:

Eq. (8)

$$\log \frac{\Pr_{DC_1}(i = 1 \mid \hat{q}_{2t}^{int}, \hat{b}_2, \hat{b}_1)}{\Pr_{DC_1}(i = 0 \mid \hat{q}_{2t}^{int}, \hat{b}_2, \hat{b}_1)},$$
and performs Log-MAP decoding. Experiments verified that this probability estimation and updating method helps the decoder request fewer parity bits and reduces the turbo decoding time.
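A brute-force sketch of this prior estimation is given below for a 4-bit DC index, assuming the Laplacian parameter α is known; summing the model of Eq. (6) over the discrete indexes consistent with the already decoded bit-planes stands in for the interval integration illustrated in Fig. 13.

```python
import numpy as np

def bitplane_prior_llr(q_si, known_msbs, bit_index, total_bits=4, alpha=0.5):
    # Estimate the prior LLR of the next DC bit-plane (Eqs. (6)-(8)):
    # accumulate the Laplacian model centered at the SI index q_si over
    # all quantization indexes consistent with the already decoded MSBs
    # and with the hypothesized value of the current bit.
    def mass(bit_value):
        total = 0.0
        for n in range(2 ** total_bits):
            bits = [(n >> (total_bits - 1 - k)) & 1 for k in range(total_bits)]
            if bits[:len(known_msbs)] != list(known_msbs):
                continue                      # inconsistent with decoded MSBs
            if bits[bit_index] != bit_value:
                continue                      # wrong hypothesis for this bit
            total += 0.5 * alpha * np.exp(-alpha * abs(q_si - n))  # Eq. (6)
        return total
    p1, p0 = mass(1), mass(0)
    return np.log(max(p1, 1e-12) / max(p0, 1e-12))   # Eq. (8)

# Example: SI index 9 (binary 1001), b1=1 and b2=0 already decoded,
# prior LLR for b3 of a 4-bit DC index (negative, i.e., b3=0 is favored).
print(bitplane_prior_llr(q_si=9, known_msbs=[1, 0], bit_index=2))
```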

Fig. 13

The probability estimation when decoding $\hat{b}_3$ of DC1, in which $\hat{q}_{2t}^{int}$ is obtained from the reconstructed SIF and the PDF is specified in Eq. (6).


4. Simulation Study

The COMPETE encoding performance is compared with that of other SIF reconstruction methods, such as MCTI, fusion-based homography (F-HOMO), MVME, and H-MVME. The H-MVME is an extension of MVME,26 in which estimated image blocks that reference the outlier are obtained through MCTI. In F-HOMO, the SIFs reconstructed from inter- and intraview images through DCVP42 and MCTI, respectively, are fused to yield the final SIF. The quality of $\hat{I}_{2t}^{WZ}$, reconstructed with its SIF generated by the above methods, is compared with that of H.264 in inter, intra, and inter-no-motion modes. The multiview CIF videos Race1, Ballroom, Breakdancer, Exit, Ballet, and Vassar, provided by ISO/IEC,43 are used as test videos, with frame rates of 30, 25, 15, 25, 15, and 25 fps, respectively. These videos present different scene complexities, rated from high to low: Race1, Ballroom, and Breakdancer are classified as high-complexity videos, Exit as medium, and Ballet and Vassar as low-complexity ones. Three successive views out of the six views of each multiview video are used to provide the left-, central-, and right-view videos. For H.264, the CABAC function is enabled and the GOP size is 12 for the inter and inter-no-motion modes; the ME search range for the former is set to 32, and zero motion is assigned for the latter. For the H.264 coder to yield comparable decoded quality for the different videos, different quantization parameters (QPs), QP ∈ {30, 28, 26, 24, 20, 18}, are used for different complexity videos. The MDVC codec adopts |GOP| = 2, in which the side-view videos and the central-view odd frames are encoded with H.264/INTRA to provide KFs for the decoder to reconstruct the $\hat{I}_{2t}^{WZ}$. The quality of the $\hat{I}_{2t}^{WZ}$ reconstructed with reference to the different SIF generation methods is compared in terms of image PSNR.

4.1. Performance Analysis

To evaluate the performance of the proposed COMPETE, an error analysis based on reconstructed blocks is first carried out to investigate the signal processing behavior. Four SIF reconstruction methods, MCTI, F-HOMO, MVME, and H-MVME, are also implemented for comparison. The SI confidence, the quality of the reconstructed WZFs, $\hat{I}_{2t}^{WZ}$, and the time complexity of the different methods are compared and evaluated. The time complexity of SI generation and the encode/decode execution times are discussed in Sec. 4.2.

4.1.1. Error analysis

The error distributions of the COMPETE and MVME are investigated to justify how the SI confidence can be improved. In the COMPETE algorithm, by performing intraview ME between central-view images, blocks are classified into motion or no-motion blocks to eliminate unnecessary ME/MC operations. For no-motion blocks, the co-located block of the previous frame is used as the MC block with zero motion. For motion blocks, when the search range comprises regions belonging to the outlier, only intraview ME on central-view images is performed; otherwise, the regular weighted MVME process is carried out. Denote the numbers of no-motion, motion, and outlier blocks in one frame as $K_n$, $K_m$, and $K_o$, which can be normalized as $k_n$, $k_m$, and $k_o$, respectively, i.e., $k_n + k_m + k_o = 1$. In the COMPETE, the MC interpolated frame can then be represented by

Eq. (9)

$$\hat{I}_{2t}^{int} = \mathcal{B}_n \cup \mathcal{B}_m \cup \mathcal{B}_o,$$
where $\mathcal{B}_\tau = \{B_\tau(i) \mid i = 1, \ldots, K_\tau\}$ denotes the set of type-$\tau$ blocks for $\tau \in \{n, m, o\}$, and the number of blocks in $\mathcal{B}_\tau$ is $K_\tau$. The variance of the block errors can be represented as

Eq. (10)

$$\sigma_B^2 = \frac{1}{\sum_\tau K_\tau} E[(I_{2t} - \hat{I}_{2t}^{int})^2] = \sum_{\tau \in \{n,m,o\}} k_\tau \cdot E\left[\|B_\tau(I_{2t}) - B_\tau(\hat{I}_{2t}^{int})\|^2 \,\middle|\, B_\tau(I_{2t}) \in \mathcal{B}_\tau\right].$$

For each image block, a specific ME procedure corresponding to its category, i.e., motion, no-motion, or outlier, is imposed. Table 1 shows the percentage of each block category for the different videos, and Table 2 shows the mean absolute error (MAE) of the block difference between the original image and its reconstructed SIF for the six test videos. As shown, the percentage of outlier blocks is very small, and their average reconstruction error with the COMPETE is smaller than that of MVME. Both the estimated intraview MVs and the interview disparity vectors are utilized to improve the SI confidence, in which the four MVs through the inner paths are utilized to perform intraview weighted MC for a central-view SIF. This SIF demonstrates higher confidence than those reconstructed through average MC in both MVME and H-MVME. As shown in Table 2, the average reconstructed block error of the proposed COMPETE is smaller than that of MVME. Table 1 shows that the percentage of no-motion blocks is the highest; these blocks mostly come from the background region or static foreground objects. For no-motion blocks, the proposed COMPETE effectively eliminates the time-consuming ME process and prevents the noisy MVs produced by the regular ME process of other methods. For example, the MVME method, instead of identifying no-motion blocks and skipping the time-consuming ME process, treats all blocks as motion blocks but does not yield a more accurate estimation, as shown in Table 2. For motion blocks, the MVME does not differentiate interview disparity vectors from intraview MVs, so its MC blocks are more degraded compared to those of COMPETE. As the COMPETE compensates no-motion blocks by the co-located blocks of the previous decoded frame, the ME errors are also decreased in addition to the reduced time complexity. In total, the proposed COMPETE effectively yields higher SI confidence while reducing the time complexity, as compared to MVME.

Table 1

No-motion, motion, and outlier blocks distribution at QP=26.

Video          Motion blocks (%)    No-motion blocks (%)    Outlier blocks (%)
Race1          72.15                25.67                   2.18
Ballroom       37.64                60.06                   2.30
Breakdancer    42.48                54.47                   3.05
Exit           20.66                78.20                   1.14
Ballet         17.17                81.60                   1.23
Vassar         3.71                 96.08                   0.21

Table 2

The comparison of estimation errors.

               Motion blocks (MAE)       No-motion blocks (MAE)    Outlier blocks (MAE)
Video          COMPETE     H-MVME        COMPETE     H-MVME        COMPETE     H-MVME
Race1          194.28      423.11        1.46        1.79          255.08      292.30
Ballroom       302.47      356.74        3.36        3.46          310.66      323.96
Breakdancer    694.01      742.19        1.54        1.79          179.59      184.42
Exit           729.95      804.74        1.91        2.73          191.55      206.61
Ballet         134.69      281.63        1.71        2.11          129.74      127.08
Vassar         82.81       364.24        2.59        2.70          182.27      196.24

4.1.2. Side information confidence

The SI confidence in PSNR achieved by MCTI, F-HOMO, MVME, the COMPETE with a direct linear transform (DLT) homography matrix generation method, and the COMPETE, for all test videos, is shown in Fig. 14. As shown, the MCTI performance is severely degraded for the high-motion videos Race1, Ballroom, and Breakdancer, since MCTI assumes linear motion and interpolates frames only along the temporal dimension. For Race1, the SIF by COMPETE is 6.2 to 7.9 dB higher in PSNR than MCTI because the sequence is a panning shot of moving objects, so MCTI cannot find correct MVs to reconstruct the SI. The F-HOMO adopts pixel-based fusion, which leads to image discontinuity artifacts when fusing disparity-synthesized and temporally interpolated (MCTI) images. The H-MVME outperforms MVME26 by 0.5 to 3 dB in PSNR for both high- and low-complexity videos. The MVME performs ME from both inter- and intraview KFs, which may lead to false/trivial ME and degraded quality, in addition to being time consuming; the H-MVME improves the MVME by eliminating the interview disparity. The proposed COMPETE estimates MVs with reference to the perspectively transformed images and detects no-motion blocks to eliminate regular ME operations. The SIFT followed by RANSAC helps to yield more stable matching point pairs, as compared to the COMPETE followed by DLT, as shown in Fig. 14. In comparison, the proposed COMPETE not only achieves the same reconstructed image quality as H-MVME but also decreases the computational complexity. For Ballet, the SIF by COMPETE is 0.1 to 2.3 dB higher in PSNR than H-MVME because the disparity problem of interview ME is solved by block prediction through the perspective transform. In summary, the COMPETE effectively reduces the computational complexity and well utilizes the interview and temporal correlations to eliminate disparity block matching noise. Experiments also justify that the proposed COMPETE yields the best SI confidence among the compared methods.

Fig. 14

Comparisons of SIF confidence in PSNR among different reconstruction methods applied on six test videos. (a) Race1, (b) Ballroom, (c) Breakdancer, (d) Exit, (e) Ballet, and (f) Vassar.


4.1.3. Objective performance evaluation

The PSNRs of the $\hat{I}_{2t}^{WZ}$ coded by the five methods under the MDVC framework and of the images reconstructed by H.264 with intra, inter, and inter-no-motion modes are calculated for comparison. The rate-distortion behavior follows that of the SI confidence. For high-complexity videos, e.g., Race1, Ballroom, and Breakdancer, the SI confidences in PSNR of COMPETE and H-MVME are comparable, and both are 0.9 to 7.8 dB higher than those of MCTI and F-HOMO, as shown in Fig. 14. The WZFs reconstructed with the COMPETE, $\hat{I}_{2t}^{WZ}$, are 0.8 to 2.9 dB higher in PSNR than those of MCTI and F-HOMO, as shown in Figs. 15(a)-15(c). For high-complexity videos, neither MCTI nor F-HOMO can estimate accurate MVs to compensate the reconstructed SIFs, which leads to more degraded WZFs. Both COMPETE and H-MVME yield higher SI confidence and hence better reconstructed quality for $\hat{I}_{2t}^{WZ}$. The COMPETE yields 0.4 to 1 dB higher PSNR than H.264/INTRA for Breakdancer, 1 to 1.5 dB higher for Ballroom, and 0 to 0.5 dB higher for Race1. The H.264 intra/inter-no-motion modes cannot encode Race1 well because the camera was tracking a moving object. For the medium-complexity video, Exit, the SI confidence in PSNR reconstructed by the COMPETE is 2.4 to 3.9 dB higher than that of MCTI, as shown in Fig. 14, and the average PSNRs of $\hat{I}_{2t}^{WZ}$ are 3.5 and 2.5 dB higher than those reconstructed from H.264 intra and MCTI, respectively, as shown in Fig. 15(d). For the low-complexity videos, Ballet and Vassar, which contain more static regions, the interpolation and fusion processes perform efficiently for all methods, resulting in smaller differences in PSNR performance. The COMPETE yields 0.8 to 2 dB higher PSNR than MCTI for the $\hat{I}_{2t}^{WZ}$, and 1.5 to 2.2 dB higher than H.264/INTRA, as shown in Figs. 15(e) and 15(f). In addition, although the MVME-based methods,26 e.g., MVME and H-MVME, demonstrate PSNR performance comparable to COMPETE, their time complexity is high. Experiments show that the COMPETE outperforms the others in SIF and WZF ($\hat{I}_{2t}^{WZ}$) quality because it prevents blocks in static regions from being corrupted by noise during the interpolation and block matching processes. Note that the KF quality setting impacts the SI confidence, and the KF quality depends on the QP selection. To justify the capability of COMPETE in improving the MDVC codec performance, the average image PSNR over KFs and WZFs under a fixed bit budget is provided for comparison. As shown in Fig. 16, the COMPETE outperforms the others by 0.4 to 4 dB in PSNR under different bitrates for both the high- and low-complexity videos, Race1 and Vassar, respectively.

Fig. 15

PSNRs of reconstructed WZFs when encoding (a)–(c) high, (d) medium and (e) and (f) low complexity videos. (a) Race1 WZF, (b) Ballroom WZF, (c) Breakdancer WZF, (d) Exit WZF, (e) Ballet WZF, and (f) Vassar WZF.


Fig. 16

The average image PSNRs comprising KFs and WZFs under different bitrates on high and low complexity videos: (a) Race1: KF + WZF and (b) Vassar: KF + WZF.


Experiments reveal that high-confidence SI is much more important than the rate control method in DVC coding: (1) when the SI confidence is low, the decoding confidence measure, $Conf_{Pr}$ in Eq. (5), does not satisfy the convergence condition $Conf_{Pr} \leq 10^{-3}$; under this condition, whether the rate control procedure is carried out or the decoder simply requests more parity bits, $Conf_{Pr}$ can hardly converge. (2) When the SI confidence is high enough and the rate control procedure transmits high-priority parity bits first, the number of decoding iterations is reduced and the convergence criterion, $Conf_{Pr} \leq 10^{-3}$, is reached quickly. One practical turbo decoder example44 shows that when the KFs are severely attacked by channel noise, which leads to low-confidence SI, the PSNRs of the reconstructed WZFs degrade rapidly because the turbo decoder cannot recover a WZF from a severely degraded SIF. The average number of requested bits and the bit rate saving under different SIF reconstruction methods are compared in Tables 3 and 4, respectively. As shown in Table 3, the proposed COMPETE requests the fewest parity bits among the four methods because it yields the highest SI confidence. Table 4 shows that the proposed rate control mechanism enables the four SI reconstruction methods to largely reduce the requested bit rates.

Table 3

The average requested bit rate of different SIF generation methods (15 FPS QCIF).

Video          MCTI      F-HOMO    H-MVME    COMPETE
Race1          125.81    128.32    73.63     69.06
Ballroom       89.29     89.56     82.50     79.93
Breakdancer    44.68     40.56     34.14     30.52
Exit           46.22     52.77     45.52     38.54
Ballet         25.99     25.83     22.72     20.65
Vassar         40.19     44.92     37.33     36.32

Table 4

The turbo decoded bit rate comparisons with (w/) and without (w/o) the rate control mechanism (15 FPS QCIF).

               MCTI               F-HOMO             H-MVME             COMPETE
Video          w/       w/o       w/       w/o       w/       w/o       w/       w/o
Race1          125.81   278.96    128.32   284.52    73.63    167.34    69.06    157.31
Ballroom       89.29    197.54    89.56    198.58    82.50    183.74    79.93    181.65
Breakdancer    44.68    101.78    40.56    93.67     34.14    79.58     30.52    72.32
Exit           46.22    105.05    52.77    118.85    45.52    105.37    38.54    91.54
Ballet         25.99    61.73     25.83    61.50     22.72    54.75     20.65    50.37
Vassar         40.19    91.34     44.92    102.55    37.33    87.84     36.32    85.66

4.2. Time Complexity Analysis

The time complexity of the proposed COMPETE, together with that of the other SI reconstruction methods, is analyzed and discussed. First, the number of arithmetic operations, additions/subtractions and multiplications/divisions, required to reconstruct the SI is calculated for the time complexity analysis. The practical execution time is also measured to verify the analysis. Denote the image width and height as W and H, and the block size and search range as $B_w$ and $S_r$, respectively.

4.2.1. Motion compensated temporal interpolation

The MCTI performs intraview ME between the images $\hat{I}_{2t-1}$ and $\hat{I}_{2t+1}$ and then performs motion-compensated prediction to interpolate the SI for the WZFs. It performs subtraction and addition operations to yield the sum of absolute differences. For one block, it needs $B_w^2$ subtractions and $B_w^2 - 1$ additions to calculate the block error. As the search area is $S_r^2$, one block requires $(2 B_w^2 - 1) \cdot S_r^2$ operations to finish its ME. The total number of operations for one image to finish ME is $(2 B_w^2 - 1) \cdot S_r^2 \cdot (H \cdot W / B_w^2) \approx 2 S_r^2 \cdot H \cdot W$. The time complexity of MCTI is denoted as $T_{MCTI} = 2 S_r^2 \cdot H \cdot W$.
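For a sense of scale, with a CIF frame (W = 352, H = 288) and an assumed search range of $S_r = 16$ (an illustrative value, not specified above), this gives

$$T_{MCTI} = 2 S_r^2 \cdot H \cdot W = 2 \cdot 16^2 \cdot 288 \cdot 352 \approx 5.2 \times 10^7 \text{ operations per interpolated frame.}$$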

4.2.2. Fusion-based homography

The fusion-based homography is implemented based on the Fusion 1 algorithm in Ref. 15. After performing the perspective transformation, the synthesized perspectively transformed image, denoted as $\hat{I}_v(2t) = \text{synthesis}[\hat{I}_l(2t), \hat{I}_r(2t)]$, and the temporally interpolated image, $\hat{I}_c^{int}(2t)$, are considered as candidates for the fused central-view image. For each pixel of the SIF to be reconstructed, the method selects the candidate, between $\hat{I}_v(2t)$ and $\hat{I}_c^{int}(2t)$, that yields the minimum distance to both the previous and the next central-view pixel values. Estimation of the initial 3×3 homography matrix can be performed off-line, so its time complexity can be ignored. For the perspective transformation, 15 MUL/ADD operations are needed for each pixel, and $2 \cdot 15 \cdot H \cdot W$ operations to yield the two reference central-view images. To obtain the fused image, it needs $2 \cdot H \cdot W$ and $2 S_r^2 \cdot H \cdot W$ operations for the temporal interpolation, and $4 \cdot H \cdot W$ operations to find the pixel that yields the minimum pixel value difference. The total number of operations for the fusion-based homography is $(36 + 2 S_r^2) \cdot H \cdot W$, and its time complexity is denoted as $T_{F\text{-}HOMO} = (36 + 2 S_r^2) \cdot H \cdot W$.

4.2.3.

Hybrid Multiview Motion Estimation

The H-MVME is an improved MVME.26 In MVME, four ME vectors through inner paths are obtained and averaged to yield the motion compensated prediction image. The MVME algorithm is designed under the assumption that the optical axes of all cameras are orthogonal to the object motion; for general multiview video, a homography transformation is required, and there exist outlier regions where the MVME is not applicable. In H-MVME, bidirectional temporal MC is performed when the search range resides in such an outlier region. The required operations comprise performing the four inner-path MEs twice, $4\cdot 2\cdot T_{\mathrm{MCTI}}$, calculating the weights (three ADDs and eight DIVs), $11\cdot H\cdot W/B_w^2$, and calculating the average, $7\cdot H\cdot W$. The total number of operations is $T_{\text{H-MVME}}=(11/B_w^2+7)\cdot H\cdot W+8\cdot T_{\mathrm{MCTI}}\approx 8\cdot T_{\mathrm{MCTI}}$, which is smaller than the complexity of MVME, $T_{\mathrm{MVME}}=16\cdot T_{\mathrm{MCTI}}$.26
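And a sketch for the H-MVME estimate, again with illustrative QCIF values and an assumed 8×8 block size:

```python
# Sketch of T_H-MVME: eight MCTI-like ME passes (the four inner paths, run twice)
# plus the comparatively small weighting and averaging terms, so T_H-MVME ~ 8*T_MCTI.
def t_h_mvme(width, height, search_range, block_size=8):
    t_mcti = 2 * search_range**2 * width * height            # from Sec. 4.2.1
    weights_and_average = (11 / block_size**2 + 7) * width * height
    return weights_and_average + 8 * t_mcti

print(f"T_H-MVME ~ {t_h_mvme(176, 144, 16):.2e} operations")  # illustrative QCIF values
```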

4.2.4.

COMPETE

The design target of the proposed COMPETE is to keep high quality reconstruction while reducing the computation complexity. First, it performs perspective transformation of the side-view images to the central view, which requires $6\cdot 15\cdot H\cdot W$ operations (three left- and three right-view images). It then performs block ME and checks whether each block is a motion block, which needs at least $T_{\mathrm{MCTI}}$ operations. Assume the ratio of motion to no-motion blocks is 1:1. For a no-motion block, the co-located block of the previous image is copied directly, so no operation is required. For motion blocks, the search range for finding disparity vectors can be reduced to $S_r^2/16$ because the reference frames are perspectively transformed from the side-view images. The COMPETE, like H-MVME, performs the four inner-path MEs twice. For interview ME, the first disparity vector estimation requires $4\cdot(2B_w^2-1)\cdot(S_r^2/16)\cdot(H\cdot W/B_w^2)\approx 0.5\cdot S_r^2\cdot H\cdot W$ operations, and the second ME after disparity compensation requires $4\cdot T_{\mathrm{MCTI}}$. Finally, including all the operations for computing the weights and the average, the total number of operations is $T_{\mathrm{COMPETE}}=T_{\mathrm{MCTI}}+6\cdot 15\cdot H\cdot W+0.5\cdot[0.5\cdot S_r^2\cdot H\cdot W+4\cdot T_{\mathrm{MCTI}}+(11/B_w^2+7)\cdot H\cdot W]\approx 4\cdot T_{\mathrm{MCTI}}$. The above time complexity analysis shows that

Eq. (11)

$$T_{\mathrm{MCTI}} < T_{\text{F-HOMO}} < T_{\mathrm{COMPETE}} < T_{\text{H-MVME}}.$$

Experiments show that the execution time of COMPETE is only about half that of H-MVME while achieving the same SI confidence, and only about four times that of MCTI.
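The following sketch plugs all four operation-count formulas into the same illustrative QCIF setting (assumed block size and search range) and checks the ordering of Eq. (11) together with the approximate ratios quoted above:

```python
# Sketch comparing the four estimates for an assumed QCIF configuration.
W, H, Bw, Sr = 176, 144, 8, 16

t_mcti    = 2 * Sr**2 * W * H
t_f_homo  = (36 + 2 * Sr**2) * W * H
t_h_mvme  = (11 / Bw**2 + 7) * W * H + 8 * t_mcti
t_compete = (t_mcti                                   # intraview ME / motion-block test
             + 6 * 15 * W * H                         # six perspective warps
             + 0.5 * (0.5 * Sr**2 * W * H             # interview ME with reduced search
                      + 4 * t_mcti                    # ME after disparity compensation
                      + (11 / Bw**2 + 7) * W * H))    # weights and averaging

assert t_mcti < t_f_homo < t_compete < t_h_mvme       # ordering of Eq. (11)
print(f"COMPETE / MCTI   = {t_compete / t_mcti:.2f}")   # roughly 3 to 4
print(f"COMPETE / H-MVME = {t_compete / t_h_mvme:.2f}") # roughly one half
```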

4.2.5.

Practical Execution Time Evaluation

The above time complexity analysis of the different SI reconstruction methods is verified by the practical execution times. All methods are implemented and executed on the same computer for fairness. The execution times of the MDVC light encoder and the H.264 encoder are investigated first.

Table 5 lists the average time to encode one frame with MDVC, H.264 intra, H.264 inter no motion, and H.264 inter, respectively. As shown, the MDVC light encoder requires about 5 to 15 times less encoding time than the others, which is consistent with the above time analysis. Table 6 lists the average execution time for reconstructing one SIF with MCTI, F-HOMO, H-MVME, and COMPETE, respectively. As shown, the average SIF reconstruction time of H-MVME is about eight times that of MCTI. For COMPETE, this execution time is largely reduced for lower complexity videos; because the probability of processing motion blocks in high complexity videos is high, the reduction there is limited, yielding 1.29 to 2.56 times less time than H-MVME. Table 7 lists the average turbo decoding time for the different SI reconstruction methods, evaluated relative to the MCTI execution time for simplicity. Experiments showed that the decoding time is reduced for higher SI confidence, which confirms that the proposed COMPETE provides better SI than the other methods.
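For reference, the Table 7 metric can be expressed as a small helper; the timing values in the example are hypothetical, and a negative result simply means that the SI method decodes faster than MCTI, which Table 7 reports as a positive saving.

```python
# Helper for the Table 7 metric, relative turbo-decoding time with respect to MCTI.
def delta_time_pct(t_si_method_ms, t_mcti_ms):
    """DeltaTime (%) = [T(SI method) - T(MCTI)] / T(MCTI) * 100."""
    return 100.0 * (t_si_method_ms - t_mcti_ms) / t_mcti_ms

print(delta_time_pct(t_si_method_ms=250.0, t_mcti_ms=400.0))   # -37.5 (hypothetical times)
```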

Table 5

The average time to encode one image (QCIF) with MDVC and with H.264 intra, inter no motion, and inter coding modes (msec/frame); the CIF values for MDVC are given in parentheses.

Video          Encoding time
               Even frame        GOP = 12
               MDVC QCIF (CIF)   H.264 Intra   H.264 Inter no motion   H.264 Inter
Race1          6.70 (23.51)      32.67         73.39                   100.06
Ballroom       6.06 (23.07)      33.46         71.97                    99.27
Breakdancer    6.06 (22.87)      29.83         68.18                   106.38
Exit           6.96 (26.42)      30.15         66.92                    90.12
Ballet         5.42 (21.51)      29.36         65.97                    91.86
Vassar         6.06 (22.93)      31.57         67.71                    89.02
Average        6.17 (23.38)      31.17         69.02                    96.12

Table 6

The average time to reconstruct one SIF (msec/frame).

Video          Reconstruction time
               MCTI     F-HOMO    H-MVME    COMPETE
Race1          64.1     102.0     498.7     386.5
Ballroom       62.8     102.0     496.8     291.1
Breakdancer    63.8     100.4     495.2     312.5
Exit           62.8     100.8     496.2     242.3
Ballet         62.8     102.0     499.4     238.2
Vassar         63.1     102.4     497.1     193.9

Table 7

The average time saving of turbo decoding with different SI reconstruction methods as compared to MCTI, where ΔTime (%) = [T(SI method) − T(MCTI)] / T(MCTI) × 100.

Video          Decoding ΔTime (%)
               F-HOMO    H-MVME    COMPETE
Race1           1.92     33.32     37.45
Ballroom        9.78      4.59      6.42
Breakdancer     4.74     17.72     19.89
Exit           13.64      3.31     11.20
Ballet         12.61     11.38     17.44
Vassar         13.30      6.05      7.88

4.3.

Subjective Performance Evaluation

The subjective performance of the different methods on the test videos is presented in this section. The QP control parameter of H.264 is set to 26.

4.3.1.

Reconstructed Side Information Frames

The SIFs reconstructed by MCTI and F-HOMO demonstrate severe block artifacts, which are smoothed by the proposed COMPETE and the modified H-MVME. However, the latter suffers from static block noise in low complexity videos because it performs regular interpolation and block matching. The proposed COMPETE effectively eliminates this block noise through weighted compensation and prediction.

4.3.2.

Reconstructed Wyner–Ziv Frame

The SI confidence affects the reconstructed WZF quality. For a WZF $I_{2t}^{\mathrm{WZ}}$ reconstructed with MCTI or F-HOMO SI, many image blocks cannot be well recovered because of the low SI confidence. In comparison, COMPETE and H-MVME yield higher SI confidence and hence higher quality for the reconstructed $\hat{I}_{2t}^{\mathrm{WZ}}$. Although COMPETE and H-MVME demonstrate comparable PSNRs for $\hat{I}_{2t}^{\mathrm{WZ}}$, the former requires less computation. The resultant images are shown in Fig. 17. The reconstructed videos demonstrate that moving objects, such as cars and persons, are blurred in the MCTI- and F-HOMO-based WZFs, while both COMPETE and H-MVME effectively eliminate this artifact for slow-motion content, e.g., the legs in Breakdancer.

Fig. 17

Subjective performance comparisons of reconstructed WZFs, whose KFs are encoded with H.264 intra at QP=26: (a) original; (b) MCTI; (c) F-HOMO; (d) H-MVME; and (e) COMPETE.


4.4.

Practical Applications

The WZ decoder combines the SI and the received parity bits to recover the original symbols. Additional parity bits are requested if the original symbols cannot be reliably decoded, and this request-and-decode process is repeated until an acceptable symbol error probability is reached.2 Performing the rate control at the decoder reduces the encoder computational loading. The feedback also enables the decoder to flexibly control SI generation, from simple to sophisticated approaches, which helps it adapt to different encoder applications. However, the feedback channel used in this interactive decoding procedure may also hinder practical applications that require independent encoding and decoding. Instead of adopting the decode-and-request procedure, the decoder could be implemented with a correlation estimation algorithm, in which the rates of previously reconstructed frames are used to predict the required rates sent to the encoder. Feedback-free45 and unidirectional DVC46 schemes have been proposed to make the decoder operations independent of those of the encoder.
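As an illustration of the rate prediction idea described above, the following is a minimal decoder-side sketch (an assumption, not the cited feedback-free designs45,46): the parity rate spent on recently decoded WZFs is averaged, with a safety margin, to estimate the rate to request for the next WZF; the class name, window size, and margin are hypothetical.

```python
# Minimal sketch of a decoder-side rate predictor that replaces the feedback loop.
from collections import deque

class RatePredictor:
    """Predict the parity rate to request for the next WZF from recent history."""

    def __init__(self, window=4, safety_margin=1.1):
        self.history = deque(maxlen=window)   # rates of recently decoded WZFs (kbps)
        self.safety_margin = safety_margin    # over-request slightly to avoid decoding failures

    def update(self, actual_rate_kbps):
        self.history.append(actual_rate_kbps)

    def predict(self, default_kbps=100.0):
        if not self.history:
            return default_kbps               # no history yet: fall back to a default
        return self.safety_margin * sum(self.history) / len(self.history)

predictor = RatePredictor()
for rate in (69.0, 72.5, 80.1):               # hypothetical rates of past WZFs
    predictor.update(rate)
print(predictor.predict())                    # predicted request for the next WZF
```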

5.

Conclusions

For a MVC that adopts DVC coding, i.e., MDVC, we proposed to utilize interview video correlations and to exploit the bit value probability distribution of transform coefficients under the block-DCT video codec framework, so as to improve the SIF confidence and the accuracy of decoded bits while speeding up the decoder rate control process. The contributions of this paper comprise: (1) for specific multiview video applications, such as wireless video sensor and wireless video surveillance networks, the proposed MDVC exploits the advantages of the DVC and multiview video frameworks to enable efficient, low complexity video encoding. Simulations verified that the MDVC reduces the encoding complexity to at least five times lower than that of H.264/INTRA while enhancing the quality of reconstructed WZFs. (2) To improve the MDVC decoding performance, a multiview SI generation algorithm, COMPETE, was proposed to improve the quality of the reconstructed SIFs and WZFs. Both the temporal correlation among intraview images and the disparity correlation among interview images are well utilized to enhance WZF reconstruction. Simulation results showed that the PSNRs of WZFs reconstructed by COMPETE are 0.5 to 3.8 dB higher than those by MCTI when encoding low to high complexity videos. (3) To improve the MDVC rate control performance, we exploit the probability distribution of transform coefficient bits and reorder the transmission priorities of DCs and ACs, such that the turbo decoder requests the fewest bits to decode the WZF. Simulations demonstrate that the PSNRs of decoded WZFs are 0.2 to 3.5 dB higher than those encoded with H.264/INTRA at the same bit rates.

The COMPETE also outperformed H-MVME by 0.15 to 2.93 dB in image PSNR, while H-MVME itself outperforms MVME by 0.5 to 1 dB. Besides, COMPETE effectively reduces the computation complexity, being 1.29 to 2.56 times lower than that of H-MVME on average. Some recent research on video coding focuses on free-view video codecs and transmission. The proposed SI reconstruction method, COMPETE, under the MDVC framework can be extended to enhance the performance of free-view video codecs, which have to handle dynamic and mobile encoders and view reconstruction; this is considered as our future research. The COMPETE can also be carried out with a pixel-level disparity model. In addition, how to embed a small amount of information at the encoder22 to improve the decoding efficiency, together with such a pixel-level disparity model, is also considered as our future research.

Appendices

Appendix:

Linear Minimum Mean Squared Error

The LMMSE predictor is used to compute the weights $w_j$ for a MC block $B_i(I,v)$ with four observations and can be represented as

Eq. (12)

$$E_i[e^2]=E_i\{[B_i(I_{2t})-B_i(\hat{I}_{2t}^{\mathrm{int}})]^2\,|\,B_i\in I_{2t}\}=E_i\Big[\Big(x_i-\sum_{j=1}^{4}w_j\hat{x}_i^j\Big)^2\,\Big|\,x_i\in I_{2t}\Big],$$
where $x_i$ and $\hat{x}_i^j$ denote $B_i(I_{2t})$ in the original WZF and $B_i(\hat{I}_{2t}^{\mathrm{int}})$ in the reconstructed SIF, respectively. To minimize $E_i[e^2]$, its first derivative is set to zero, i.e., $\partial E_i[e^2]/\partial w_j=0$:

Eq. (13)

$$E_i\Big[\hat{x}_i^j\cdot\Big(x_i-\sum_{j=1}^{4}w_j\hat{x}_i^j\Big)\Big]=0,\quad\text{or}\quad\sum_{j=1}^{4}w_j R_{\hat{x}_i^j\hat{x}_i^j}=R_{x_i\hat{x}_i^j}.$$
The optimal weights, $\mathbf{w}=[w_1\ w_2\ w_3\ w_4]^T$, can be calculated as $\mathbf{w}=R_{\hat{x}_i\hat{x}_i}^{-1}R_{x_i\hat{x}_i}$. This procedure could be carried out entirely at the encoder for higher accuracy, but doing so conflicts with the design target of light encoding. For practical applications, as different videos demonstrate different MVs and the original WZF, $I_{2t}$, is not available at the decoder, $I_{2t}$ is replaced by the MCTI frame, which is interpolated from $\hat{I}_{2t-1}$ and $\hat{I}_{2t+1}$ at the decoder. The LMMSE predictor in Eq. (13) is utilized in the MC to yield optimal weights for the individual blocks reached with MV $v_m^j$, instead of assigning the heaviest weight, $w_j=1$, to the block with the minimum SAD, because only lossy reconstructed KFs are available at the decoder and the block with the minimum-SAD MV does not always correspond to the best matched block. When the KF compression ratios differ, the MV prediction results will also differ and be unstable. This optimally weighted MC effectively exploits the interview disparity correlation by assigning different weights to blocks with different MEs, which prevents the block-based full search from trivial/unstable matching and increases the prediction accuracy. Experiments showed that assigning the weights obtained from the LMMSE estimator can improve the SIF PSNR by up to 0.1 dB for low complexity video and by 0.3 to 0.4 dB for medium-to-high complexity video, as compared to weights proportional to the block fidelity [Eq. (2)]. The PSNR improvement depends on the accuracy of the four MVs, $\{v_m^j\}_{j=1,\ldots,4}$, which degrades when encoding higher complexity videos. Under this condition, the differences among the four MVs are enlarged, and the LMMSE estimator helps to yield stable weights for fusing blocks with different MEs. For low complexity videos, both the LMMSE estimator and the normalized fidelity-based weighting strategy demonstrated comparable performance.
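To make the normal-equation solution concrete, the following numerical sketch (with synthetic data, not the paper's blocks) stacks four candidate blocks, forms the correlation matrix and cross-correlation vector of Eq. (13), and solves for the weights; the function name and the toy 8×8 blocks are assumptions.

```python
# Minimal numerical sketch of the LMMSE weighting in Eqs. (12)-(13).
import numpy as np

def lmmse_weights(candidates, target):
    """Solve the normal equations: sum_j w_j R_{xhat,xhat} = R_{x,xhat}."""
    X = np.asarray(candidates, dtype=float)   # 4 x N, one flattened candidate block per row
    y = np.asarray(target, dtype=float)       # length-N reference block (e.g., the MCTI block)
    R = X @ X.T / X.shape[1]                  # 4 x 4 correlation matrix
    r = X @ y / X.shape[1]                    # length-4 cross-correlation vector
    return np.linalg.solve(R, r)              # w = R^{-1} r

rng = np.random.default_rng(0)
blocks = rng.normal(size=(4, 64))             # four synthetic 8x8 candidate blocks
ref = 0.4 * blocks[0] + 0.3 * blocks[1] + 0.2 * blocks[2] + 0.1 * blocks[3]
print(lmmse_weights(blocks, ref))             # recovers roughly [0.4, 0.3, 0.2, 0.1]
```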

Acknowledgments

This work is partially supported by the Taiwan Ministry of Science and Technology with Grant No. MOST 105-2221-E-011-116 and Taiwan Building Technology Center with Grant No. IBRC 105H451709.

References

1. 

A. Vetro, T. Wiegand and G. J. Sullivan, "Overview of the stereo and multiview video coding extensions of the H.264/MPEG-4 AVC standard," Proc. IEEE, 99(4), 626 –642 (2011). http://dx.doi.org/10.1109/JPROC.2010.2098830 Google Scholar

2. 

B. Girod, A. M. Aaron, S. Rane and D. Rebollo-Monedero, "Distributed video coding," Proc. IEEE, 93(1), 71 –83 (2005). http://dx.doi.org/10.1109/JPROC.2004.839619 Google Scholar

3. 

Z. Xiong, A. D. Liveris and S. Cheng, "Distributed source coding for sensor networks," IEEE Signal Process. Mag., 21 (5), 80 –94 (2004). http://dx.doi.org/10.1109/MSP.2004.1328091 ISPRE6 1053-5888 Google Scholar

4. 

M. Flierl and B. Girod, “Coding of multi-view image sequences with video sensors,” in Int. Conf. Image Processing, 609 –612 (2006). Google Scholar

5. 

X. Guo and Y. Lu, “Distributed multiview video coding,” Proc. SPIE, 6077 60770T (2006). http://dx.doi.org/10.1117/12.642989 Google Scholar

6. 

D. Slepian and J. Wolf, “Noiseless coding of correlated information sources,” IEEE Trans. Inf. Theory, 19 471 –480 (1973). http://dx.doi.org/10.1109/TIT.1973.1055037 IETTAW 0018-9448 Google Scholar

7. 

A. Wyner and J. Ziv, “The rate-distortion function for source coding with side information at the decoder,” IEEE Trans. Inf. Theory, 22 (1), 1 –10 (1976). http://dx.doi.org/10.1109/TIT.1976.1055508 IETTAW 0018-9448 Google Scholar

8. 

ISO, “Information technology-coding of audio-visual objects-part 10: advanced video coding,” (2004). https://www.cmlab.csie.ntu.edu.tw/~cathyp/eBooks/14496_MPEG4/iso14496-10.pdf Google Scholar

9. 

A. Aaron, S. Rane, R. Zhang and B. Girod, "Wyner-Ziv coding for video: applications to compression and error resilience," in Proc. IEEE Data Compression Conf., 93 –102 (2003). http://dx.doi.org/10.1109/DCC.2003.1194000 Google Scholar

10. 

C. Z. Y. Cao, S. Gao and G. Qiu, “Towards practical distributed video coding for energy-constrained networks,” Chin. J. Electron., 25 (1), 121 –130 (2016). http://dx.doi.org/10.1049/cje.2016.01.019 CHJEEW 1022-4653 Google Scholar

11. 

C. Yeo and K. Ramchandran, "Robust distributed multiview video compression for wireless camera networks," IEEE Trans. Image Process., 19 (4), 995 –1008 (2010). http://dx.doi.org/10.1109/TIP.2009.2036715 Google Scholar

12. 

M. Ouaret, F. Dufaux and T. Ebrahimi, "Iterative multiview side information for enhanced reconstruction in distributed video coding," EURASIP J. Image Video Process., 2009 591915 (2009). http://dx.doi.org/10.1155/2009/591915 Google Scholar

13. 

E. A. X. Artigas and L. Torres, “Side information generation for multiview distributed video coding using a fusion approach,” in Proc. of Nordic Signal Processing Symp., 250 –253 (2006). Google Scholar

14. 

D. Kubasov, J. Nayak and C. Guillemot, “Optimal reconstruction in Wyner-Ziv video coding with multiple side information,” Proc. Multimedia Signal Process. (MMSP) Workshop, 183 –186 (2007). http://dx.doi.org/10.1109/MMSP.2007.4412848 Google Scholar

15. 

M. Ouaret, F. Dufaux and T. Ebrahimi, "Fusion-based multiview distributed video coding," in Proc. of ACM Video Surveillance and Sensor Networks, 139 –144 (2006). Google Scholar

16. 

Y. W. H. Yin, M. Sun and Y. Liu, “Fusion side information based on feature and motion extraction for distributed multiview video coding,” in Visual Communications and Image Processing Conf., 414 –417 (2014). Google Scholar

17. 

M. C. T. Maugey, W. Miled and B. Pesquet-Popescu, “Fusion schemes for multiview distributed video coding,” in Signal Processing Conf., 559 –563 (2009). Google Scholar

18. 

F. Dufaux, “Support vector machine based fusion for multi-view distributed video coding,” Int. Conf. Digital Signal Process. (DSP), 1 –7 (2011). http://dx.doi.org/10.1109/ICDSP.2011.6005004 Google Scholar

19. 

S. Shimizu et al., “Improved view interpolation for side information in multiview distributed video coding,” in Int. Conf. on Distributed Smart Cameras, 1 –8 (2009). Google Scholar

20. 

G. Petrazzuoli et al., “Novel solutions for side information generation and fusion in multiview dvc,” EURASIP J. Adv. Signal Process., 2013 154 (2013). Google Scholar

21. 

S. Shimizu and H. Kimata, “View synthesis motion estimation for multiview distributed video coding,” in European Signal Processing Conf., 2057 –2061 (2010). Google Scholar

22. 

M. Makar et al., “Quality-controlled view interpolation for multiview video,” in Int. Conf. Image Processing, 1805 –1808 (2011). Google Scholar

23. 

F. Pereira, J. Ascenso and C. Brites, “Studying the GOP size impact on the performance of a feedback channel-based Wyner-Ziv video codec,” 801 –815 (2007). http://dx.doi.org/10.1007/978-3-540-77129-6_68 Google Scholar

24. 

D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vision, 60 (2), 91 –110 (2004). http://dx.doi.org/10.1023/B:VISI.0000029664.99615.94 IJCVEQ 0920-5691 Google Scholar

25. 

M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, 24 (6), 381 –395 (1981). http://dx.doi.org/10.1145/358669.358692 Google Scholar

26. 

E. A. X. Artigas and L. Torres, “A comparison of different side information generation methods for multi-view distributed video coding,” in Proc. SIGMAP, 450 –455 (2007). Google Scholar

27. 

A. S. Barbulescu and S. S. Pietrobon, “Rate compatible turbo codes,” Electron. Lett., 31 535 –536 (1995). http://dx.doi.org/10.1049/el:19950406 Google Scholar

28. 

D. N. Rowitch and L. B. Milstein, “On the performance of hybrid FEC/ARQ system using rate compatible punctured turbo (RCPT) codes,” IEEE Trans. Commun., 48 (6), 948 –959 (2000). http://dx.doi.org/10.1109/26.848555 Google Scholar

29. 

C. Brites and F. Pereira, “Correlation noise modeling for efficient pixel and transform domain Wyner-Ziv video coding,” IEEE Trans. Circuits Syst. Video Technol., 18 (9), 1177 –1190 (2008). http://dx.doi.org/10.1109/TCSVT.2008.924107 Google Scholar

30. 

M. Salmistraro et al., “Joint disparity and motion estimation using optical flow for multiview distributed video coding,” in European Signal Processing Conf., 286 –290 (2014). Google Scholar

31. 

N. Deligiannis et al., “Side-information-dependent correlation channel estimation in hash-based distributed video coding,” IEEE Trans. Image Process., 21 (4), 1934 –1949 (2012). http://dx.doi.org/10.1109/TIP.2011.2181400 IIPRE4 1057-7149 Google Scholar

32. 

P. Márquez-Neila et al., “Improving RANSAC for fast landmark recognition,” in Proc. Computer Vision and Pattern Recognition Workshop, 1 –8 (2008). Google Scholar

33. 

K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Trans. Pattern Anal. Mach. Intell., 27 (10), 1615 –1630 (2005). http://dx.doi.org/10.1109/TPAMI.2005.188 ITPIDJ 0162-8828 Google Scholar

34. 

R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed.Cambridge University Press, England (2004). Google Scholar

35. 

J. Ascenso, C. Brites and F. Pereira, "Content adaptive Wyner-Ziv video coding driven by motion activity," in Proc. IEEE Int. Conf. on Image Processing, 605 –608 (2006). http://dx.doi.org/10.1109/ICIP.2006.312408 Google Scholar

36. 

P. H. S. Torr and A. Zisserman, "MLESAC: a new robust estimator with application to estimating image geometry," Comput. Vision Image Understanding, 78 (1), 138 –156 (2000). http://dx.doi.org/10.1006/cviu.1999.0832 CVIUF4 1077-3142 Google Scholar

37. 

D. Varodayan, D. Chen, M. Flierl and B. Girod, "Wyner-Ziv coding of video with unsupervised motion vector learning," Signal Process. Image Commun., 23 (5), 369 –378 (2008). http://dx.doi.org/10.1016/j.image.2008.04.009 Google Scholar

38. 

A. Aaron, S. Rane, E. Setton and B. Girod, "Transform-domain Wyner-Ziv codec for video," Proc. SPIE, 5308 520 –528 (2004). http://dx.doi.org/10.1117/12.527204 Google Scholar

39. 

K. Sayood, Introduction to Data Compression, 4th ed., Morgan Kaufmann (2012). Google Scholar

40. 

D. Kubasov, K. Lajnef and C. Guillemot, "A hybrid encoder/decoder rate control for Wyner-Ziv video coding with a feedback channel," in IEEE Workshop on Multimedia Signal Processing, 251 –254 (2007). http://dx.doi.org/10.1109/MMSP.2007.4412865 Google Scholar

41. 

Y. Vatis, S. Klomp and J. Ostermann, "Inverse bit plane decoding order for turbo code based distributed video coding," in IEEE Int. Conf. on Image Processing, 1 –4 (2007). http://dx.doi.org/10.1109/ICIP.2007.4379077 Google Scholar

42. 

M. Ouaret, F. Dufaux and T. Ebrahimi, "Multiview distributed video coding with encoder driven fusion," in Proc. of European Signal Processing Conf., (2007). Google Scholar

43. 

ISO, “The theory of a general quantum system interacting with a linear dissipative system,” (2005). ftp://ftp.merl.com/pub/avetro/mvc-testseq/ Google Scholar

44. 

J. J. Chen et al., “A multiple description video codec with adaptive residual distributed coding,” IEEE Trans. Circuits Syst. Video Technol., 22 (5), 754 –768 (2012). http://dx.doi.org/10.1109/TCSVT.2011.2179459 Google Scholar

45. 

J. L. Martinez, “Feedback free DVC architecture using machine learning,” in Proc. IEEE Int. Conf. Image Processing, 1140 –1143 (2008). http://dx.doi.org/10.1109/ICIP.2008.4711961 Google Scholar

46. 

M. B. Badem, W. A. C. Fernando and A. M. Kondoz, "Unidirectional distributed video coding using dynamic parity allocation and improved reconstruction," in Int. Conf. Info. Automation for Sustainability, 335 –340 (2010). Google Scholar

Biography

Shih-Chieh Lee received his PhD from the National Taiwan University of Science and Technology in 2013 in electrical engineering. He is currently working at Nokia Networks as a network planning and optimization engineer. His research interests include image/video processing and the related topics in multimedia communications.

Jiann-Jone Chen received his PhD from the National Chiao-Tung University in 1997 in electronic engineering. He was a researcher with the Advanced Technology Center, Information and Communications Research Laboratories, Industrial Technology Research Institute (ITRI), Hsinchu. He is currently an associate professor in the Electrical Engineering Department of National Taiwan University of Science and Technology. His research interests include image/video processing, cloud video processing/streaming, image retrieval, and several topics in multimedia communications.

Yao-Hong Tsai received his PhD in information management from the National Taiwan University of Science and Technology (NTUST), Taipei, Taiwan, in 1999. He was a researcher with the Advanced Technology Center, Information and Communications Research Laboratories, Industrial Technology Research Institute (ITRI), Hsinchu. He is currently an associate professor with the Department of Information Management, Hsuan Chuang University, Hsinchu. His current research interests include image processing, pattern recognition, and computer vision.

Chin-Hua Chen received his MSEE degree from the National Taiwan University of Science and Technology in 2010. He has been an engineer with Alpha Networks since 2012. His research interests comprise image/video processing, coding, and channel coding.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Shih-Chieh Lee, Jiann-Jone Chen, Yao-Hong Tsai, and Chin-Hua Chen "Toward enhancing the distributed video coder under a multiview video codec framework," Journal of Electronic Imaging 25(6), 063022 (20 December 2016). https://doi.org/10.1117/1.JEI.25.6.063022
Received: 13 July 2016; Accepted: 17 November 2016; Published: 20 December 2016