1. Introduction

Multiview video codec (MVC) design has become popular,1 based on which widespread applications, such as three-dimensional (3-D) video, free-viewpoint television (FTV), and video surveillance networks, can be developed. The 3-D video provides high quality and immersive multimedia entertainment that can be experienced through various channels, including movies, TV, the internet, and so on. The FTV is an MVC system that allows switching among different viewpoints, in which the video scene is captured by cameras from specific view angles. For video surveillance networks, the MVC can be used to monitor and detect unusual events/objects. However, the MVC requires intersensor communication, which is expensive and not feasible in some applications. The amount of information and the computational load of an MVC codec are very large compared to those of a monoview codec, so efficiently processing and compressing multiview videos is challenging. The Joint Video Team has been working on the MVC, which captures videos from different video cameras and encodes these signals with reference to each other to yield a single bitstream. To enhance codec performance, most MVC schemes exploit correlations between both intraview and interview frames. The encoder performs block motion compensation (MC) and disparity estimation to remove correlations between images along the intraview/temporal and interview dimensions to achieve high compression efficiency. Under this MVC framework, the time complexity of encoding operations is high, so it cannot provide low complexity encoding for applications like wireless video sensor/surveillance networks and low-power MVC capturing devices. The coding complexity has to be shifted to the decoder to make these applications feasible.
The distributed video coder (DVC)2,3 was proposed to effectively shift coding complexity to the decoder: signals captured and encoded by several low-power devices independently can be decoded jointly. It can be extended to deal with multiview video signals,4,5 in which the disparity information among images of different views can be exploited for removing correlations, in addition to correlations among intraview images. The DVC2 was developed based on lossless distributed source coding, also known as the Slepian–Wolf coder (SWC).6 An important aspect of the SWC is that separate encoding can theoretically achieve the same compression ratio as joint encoding, as long as the correlations among data streams are exploited by a joint decoder. This SWC framework was extended to lossy compression with side information (SI) at the decoder,7 as in the case of the Wyner–Ziv (WZ) coder. With the WZ coding algorithm, the DVC treats video compression as a channel coding problem. The input video of the DVC is decomposed into odd and even sequences, in which the former is encoded as key frames (KFs) and the latter as WZ frames (WZFs). The KFs are encoded with the H.264/AVC8 intramode, H.264/INTRA, and the WZFs are block-transformed, quantized, and transmitted through error correction codes bit-plane by bit-plane, in which only part of the parity bits are transmitted. At the decoder, the KFs are utilized to yield the SI, a noisy version of the WZF, which acts as the systematic part of an error correction code and cooperates with the received parity bits to correct channel errors. Compared to conventional video codecs, the DVC effectively shifts a considerable amount of the coding complexity from the encoder to the decoder. It can also be applied to error resilience control,9 which treats the side information frame (SIF) as additional reference information to correct channel errors.
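The odd/even decomposition into KFs and WZFs described above can be sketched as follows; this is a minimal illustration, and `split_kf_wzf` is a hypothetical helper rather than part of any codec API.

```python
# Minimal sketch: split a monoview sequence into key frames (KFs, coded
# with H.264/INTRA) and Wyner-Ziv frames (WZFs, coded via parity bits).
def split_kf_wzf(frames):
    """Odd-position frames (1-based) become KFs, even-position ones WZFs."""
    kfs = frames[0::2]   # 1st, 3rd, 5th, ... frames
    wzfs = frames[1::2]  # 2nd, 4th, 6th, ... frames
    return kfs, wzfs

kfs, wzfs = split_kf_wzf(list(range(8)))  # toy frame indices 0..7
# kfs  -> [0, 2, 4, 6]
# wzfs -> [1, 3, 5, 7]
```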
Recently, a new distributed video codec based on modulo operation in the pixel domain has been proposed,10 which demonstrates lower decoding complexity. Integrating the MVC with multiview distributed video coding (MDVC) allows several low-power capturing and encoding devices to encode independently while their signals are decoded jointly. A view-synthesis and disparity-based correlation model that exploits interview video correlation was proposed to deliver error-resilient video in a distributed multicamera system.11 One simple MDVC example with a left-, a right-, and a central-view camera is shown in Fig. 1. The left- (L) and right-view (R) videos are encoded and decoded by a traditional video codec, e.g., H.264/INTRA, to act as KFs for the DVC decoding. The central-view video is encoded as interleaved intraframes and WZFs, i.e., a group of pictures (GOP) consisting of one intraframe and one WZF. At the decoder, the SI for a WZF can be estimated by exploiting the intraview and interview image correlations, respectively. The decoded KFs are utilized to jointly reconstruct the WZFs based on inter- and intraview image correlations. These correlations are utilized by assigning weights to the different motion vectors (MVs) estimated under the MDVC framework. This decoder-driven fusion method is adopted to improve codec performance, e.g., peak signal to noise ratios (PSNRs) and time complexity. In addition, the embedded DVC makes it feasible to set up low complexity, mobile encoders for multiview video acquisition, enabling low delay and real-time processing of the MDVC. The decoder can absorb the shifted computational complexity by providing a high performance computer for central decoding, e.g., large buffers, disk arrays, and high-speed CPUs. Much research has addressed improving MDVC SIF quality.12–14 An iterative SIF generation method uses the decoded WZF to refine the SIF,12 based on which a second iteration can enhance the quality of decoded images.
By performing interpolation along the intra- and interview video dimensions, respectively, to yield candidate SIFs, the final SIF can be fused from these candidates with a specific reliability measurement.13 The interview interpolated candidate SIF for fusion can be enhanced by using a perspectively transformed one,15,16 which helps to fuse better final SIFs and demonstrates better coding performance than monoview DVC. Three new fusion techniques that exploit signal properties of neighboring residual frames along the intra- and interview directions were proposed for robustness and improved SIF quality.17 The fusion can also adopt a support vector machine to identify a set of features for classifying pixels into either the temporal or the disparity class, by which the fusion can yield a better SIF.18 It provides a good solution for fusing intra- and interview predictions. However, these fusion methods suffer from performance degradation due to low temporal prediction quality and irregular video motion. An adaptive filtering view interpolation method19,20 was proposed to minimize the difference between the SIF and the decoded KF, which can compensate for intercamera mismatches and improve SIF quality. When occlusion exists between interview videos, temporal frame interpolation is utilized to compensate for the deficiency of interview linear fusion20 to improve SIF quality. Various SI generation methods have been evaluated and compared for utilization efficiency. By estimating motion on interpolated frames, irregular motion artifacts can be eliminated and the SIF quality improved.21 One MDVC codec22 was designed to transmit a small amount of error control information to replace an untransmitted frame, the information being obtained from a low-dimensional blockwise projection of the frame, i.e., a mean-based projection.
The most prominent feature of that work is that it is performed as a postprocessing step after decoding and interpolating the received video, which allows easy integration with various video transmission systems. A conventional video codec usually adopts a coding structure with a GOP size larger than 15 to yield good enough rate-distortion (RD) performance. For the MDVC, the GOP size is usually set smaller because, when the WZ codec adopts longer GOP sizes, performing ME becomes difficult and less reliable, such that the reconstructed SIF quality is degraded. Previous research23 investigated the rate-distortion and complexity performance of the feedback-channel based WZ codec as a function of the GOP size and justified that the smallest GOP size, which also has the lowest encoder complexity, yields the best RD performance compared with the conventional video codec. For the MDVC, a short-GOP coding structure is adopted for simplicity and efficiency. Under the MDVC framework, we propose to process static and nonstatic image regions with different procedures. By exploiting correlations between images along the inter- and intraview dimensions, the proposed weighted block-matching prediction (BMP) can yield higher SIF quality. This proposed categorized block matching prediction with fidelity weights method is abbreviated as COMPETE. At the decoder, the scale-invariant feature transform (SIFT)24 is adopted to find stable key feature points in the first decoded KF image of each view, which are used for matching correspondent features among interview video images to estimate the homography matrices through a RANSAC25 algorithm. The SIFT processing time is proportional to image size. The homography matrices are estimated once at the decoder to perspectively transform the side-view images into the central view. The homography matrix can also be estimated at a regular time interval or dynamically according to scene foreground/background change.
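The consensus-set logic of RANSAC used above to pick the best homography can be illustrated on a deliberately simplified model. The sketch below estimates a 2-D translation (not a full 3x3 homography) between matched feature points with an outlier; the function name and toy data are assumptions for illustration only.

```python
import random

# Illustrative RANSAC sketch: repeatedly fit a model from a minimal random
# sample and keep the model with the maximum consensus (inlier) set.
def ransac_translation(src, dst, iters=100, tol=1.0, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        i = rng.randrange(len(src))          # minimal sample: one pair
        dx = dst[i][0] - src[i][0]
        dy = dst[i][1] - src[i][1]
        inliers = sum(
            1 for (sx, sy), (tx, ty) in zip(src, dst)
            if abs(sx + dx - tx) <= tol and abs(sy + dy - ty) <= tol
        )
        if inliers > best_inliers:            # keep maximum consensus set
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

src = [(0, 0), (1, 0), (2, 1), (5, 5)]
dst = [(3, 2), (4, 2), (5, 3), (9, 9)]       # last pair is a mismatch
model, n = ransac_translation(src, dst)
# model -> (3, 2) with 3 inliers; the outlier pair is rejected
```

The same idea scales to homographies: the minimal sample becomes four point pairs, and the inlier test becomes a reprojection-error threshold.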
In the proposed COMPETE algorithm, image blocks are categorized into motion, no-motion, and outlier blocks, which are processed in different ways. For motion blocks, with both the perspectively transformed side-view images and the reconstructed central-view images at the decoder, the block MC procedure can be performed between adjacent transformed and central-view images to yield MVs. By combining the blocks reached by these MVs with weights proportional to block fidelity, smoother and higher quality SIFs are generated. For no-motion blocks, the current block is compensated by the co-located block in the previous frame. For blocks residing on the outlier resulting from the perspective transformation, temporal bidirectional MC is performed between the adjacent central-view images. The proposed COMPETE algorithm helps to improve the SI confidence and the quality of the decoded WZF for the MDVC system. The COMPETE also effectively decreases computational load while achieving comparable PSNR performance with other SIF reconstruction methods, e.g., MVME26 and H.264/INTRA. For rate control of the MDVC channel coding, the turbo codec is designed to let the decoder receive just enough parity bits from the encoder for signal reconstruction. The rate compatible punctured turbo (RCPT) code is adopted for the MDVC channel coding, which originated from unequal error protection for unreliable transmission.27 An automatic repeat request (ARQ) rate control method was developed under RCPT28 to transmit the fewest parity bits for successful decoding.
For the turbo decoder to reference more reliable prior probabilities, reducing its iteration count and improving decoding efficiency, the correlation of DCT coefficients between the original frame and its SIF is modeled as a Laplacian distribution.29 Different puncture patterns were designed for the direct and alternating current coefficients, DCs and ACs, to yield the parity bits, based on which the correlation between bit-planes is exploited and utilized to estimate the posteriori probability, which provides the priori probability for turbo decoding. Simulations verified that the turbo decoding time can be reduced to 37%, as compared to other SIF generation methods. In what follows, SIF reconstruction methods developed based on the MDVC system and the proposed COMPETE methods are described in Sec. 2. The proposed rate control algorithm to improve the MDVC performance is described in Sec. 3. Section 4 is the simulation study. Section 5 concludes this paper.

2. Multiview Distributed Video Coding Side Information

For an MDVC in which half of the central-view images are encoded as WZFs, the SIF quality at the decoder dominates the WZ codec performance. The SIF at the decoder can be considered as a reconstructed image of the original WZF at the encoder transmitted through a noisy channel. If the SIF quality is high enough, fewer parity bits will be requested during decoding and higher codec efficiency can be achieved. In a monoview video codec, the general approach is to perform temporal interpolation/extrapolation from KFs to yield the SIF; other approaches adopt motion compensated interpolation to improve SIF quality, such as using an optical flow predictor30 or a hash-based estimator.31 For the MVC, the same scene is captured from different viewing angles by different cameras, such that the correlation among different view videos can be utilized for SIF generation.
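One simple way to combine temporal (intraview) and interview predictions, in the spirit of the fusion methods surveyed in Sec. 1, is a per-pixel reliability test against the nearest decoded key frame. This is an illustrative sketch with a hypothetical reliability measure, not any specific cited method.

```python
# Per-pixel fusion sketch: pick, for each pixel, the candidate SIF whose
# residual against the decoded key frame is smaller (assumed more reliable).
def fuse_sifs(temporal_sif, interview_sif, key_frame):
    fused = []
    for t, v, k in zip(temporal_sif, interview_sif, key_frame):
        # smaller residual vs. the key frame => higher assumed reliability
        fused.append(t if abs(t - k) <= abs(v - k) else v)
    return fused

# 1-D toy "frames" standing in for pixel rows
fused = fuse_sifs([10, 50], [12, 40], [11, 41])
# first pixel taken from the temporal candidate, second from the interview one
```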
Under the MDVC framework, we propose to utilize SIFT24 feature extraction and the RANSAC25,32 algorithm to exploit feature correspondences among interview video images. The SIFT outperforms other feature descriptors on images with real geometric and photometric transformations,33 and the RANSAC helps to robustly fit a model to data in the presence of outliers, based on which the homography matrices34 can be estimated for perspective transformation from the side-view videos to the central view. The proposed BMP algorithm can then be carried out to yield a high quality SIF and improve the quality of the decoded WZF. Different SIF reconstruction methods developed based on the MDVC framework, such as motion compensated temporal interpolation (MCTI),35 MVME,26 and hybrid-MVME (H-MVME), are first reviewed for performance comparison in the following sections.

2.1. Side Information Reconstruction

The MCTI35 is an image reconstruction/interpolation method in which block ME and MC are utilized to exploit the temporal correlation of monoview videos. To interpolate the current frame, the MVs estimated between its previous frame and its next frame are halved for bidirectional MC to yield the interpolated SIF. The MVME scheme26 carried out at the decoder is shown in Fig. 2, in which the KFs are coded with H.264/INTRA and the WZF is to be reconstructed with its SIF. For one WZF, two ME paths can be adopted: the inner path is estimated by performing disparity vector estimation followed by MV estimation, as demonstrated in Fig. 3(a); the outer path is obtained by reversing these two vector estimation procedures, as shown in Fig. 3(b). To interpolate each block in the WZF, let the side-view image at the current time be the target image, in which a best matched block, with a disparity vector corresponding to the co-located block in the central-view image, is found. The best matched block is then used to find another best matched block in the temporally adjacent side-view image with an MV.
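The MCTI interpolation described at the start of this subsection can be sketched on 1-D toy data: the MV between the previous and next key frames is halved, and the interpolated block is the average of the two half-compensated blocks. Block size, frames, and the even-MV assumption are illustrative.

```python
# Minimal MCTI sketch on 1-D "frames".
def mcti_block(prev, nxt, pos, mv, bsize):
    half = mv // 2                                 # halve the full MV
    fwd = prev[pos - half : pos - half + bsize]    # block reached backward
    bwd = nxt[pos + half : pos + half + bsize]     # block reached forward
    return [(a + b) // 2 for a, b in zip(fwd, bwd)]

prev = [0, 0, 10, 20, 0, 0, 0, 0]
nxt  = [0, 0, 0, 0, 0, 0, 14, 24]
# the "object" moved by mv=4 between prev and nxt; interpolate at pos=4
block = mcti_block(prev, nxt, pos=4, mv=4, bsize=2)
# block -> [12, 22], the average of [10, 20] and [14, 24]
```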
This procedure yields one inner-path MV for the co-located block in the current WZF. By applying the same procedure to the other three sets of reference images, three other inner-path MVs can be found for the current block in the WZF. The outer-path MVs are obtained by the same procedure but with MV estimation first and then disparity vector estimation. When all ME paths of the WZF are included, i.e., four inner and four outer paths, the MVME yields eight estimated frames. The SIF can then be reconstructed by taking the weighted/nonweighted average of the corresponding blocks of the estimated MVs. Although the MVME provides several estimated MVs for reference, it suffers from heavy computation. In addition, it may lead to trivial estimation errors for no-motion blocks. The MVME approach utilizes general ME operations, designed for intraview video, to estimate disparity vectors among interview images. To bridge this inherent gap between disparity and motion estimation, we propose to estimate the homography matrix to perspectively transform the side-view video into the central view, such that applying ME to interview images becomes well founded. This H-MVME approach can yield better PSNR performance than MVME. In addition to handling the MVME in the hybrid approach, we propose to eliminate trivial ME operations for no-motion blocks and perform BMP based on calculating the weighted sum of MC blocks reached through different MVs, denoted as COMPETE as described above, to improve the MVME and yield a high quality SIF. In case the disparity/MV estimation operates on the outlier, i.e., regions without correspondent pixels resulting from the perspective transformation, temporal MCTI is adopted to interpolate the current block in the WZF.

2.2. COMPETE Side Information Reconstruction

The COMPETE SIF reconstruction method is proposed to enhance the H-MVME to yield an SIF with higher confidence.
When homography matrices are not available for perspective transformation, we utilize SIFT feature extraction and the RANSAC procedure to estimate the homography matrices and then utilize BMP to yield a high confidence SIF.

2.2.1. Homography

The homography relates the pixel coordinates in two images. When it is applied to every pixel, the new image is a warped version of the original one, and this homography relationship is independent of the scene structure. To be more specific, one homography matrix can transform one camera view to another.34 To estimate it, the SIFT24 algorithm is first applied on the video images of the different views to find stable key feature points. Tentative feature point pairs between two images are selected to provide candidate homography matrices. The feature point pairs and candidate matrices are iteratively selected and justified by finding the maximum consensus set through the RANSAC procedure to yield the best homography. At this stage, it seeks to find all correspondent SIFT points, or matching pairs, between the two different view images. Mismatches occur because the matching process assumes proximity and similarity, and some correspondences are outliers. In general, the RANSAC outperforms gradient descent methods36 in that too many outliers would prevent the latter from converging to the global optimum.

2.2.2. Scale-invariant feature transform

The SIFT24 procedure helps to represent one image with robust feature points. It transforms one image into scale-invariant feature coordinates corresponding to local features. The procedure ignores low contrast feature points and eliminates edge responses, retaining the remaining stable keypoints.

2.2.3. Interpolation and homography

The SIF at the turbo decoder is generated by the "interpolation/homography" module, as shown in Fig. 4.
We propose to exploit correlations among interview images, in addition to intraview ones, to prevent reference SIFs from having severe disparity. The reference central-view images can be obtained through the homography matrices estimated from the left- and right-view images. To estimate these matrices, the first intracoded frames of the three views, received and reconstructed at the decoder, are used as sample images to extract correspondent stable SIFT features between the left/right-view and central-view images. Based on the correspondent feature points, the RANSAC procedure is carried out to find the homography matrices that yield the maximum number of inliers. The reference central-view images can then be obtained by perspectively transforming the decoded left- and right-view images through these matrices, as shown in Fig. 5(a). With the reference central-view images, the BMP procedure can be carried out to yield the SIF. For one multiview video, the homography matrix that transforms a side-view video into the central view has to be estimated only once, with reference to the first decoded KFs, at the beginning of decoding. With the homography matrices estimated through the SIFT and RANSAC procedures, the BMP among the transformed side-view images and the decoded central-view images is performed to yield the SIF, as described in the following section.

2.3. Block Matching Prediction

Performing perspective transformation from side-view to central-view frames results in an outlier, i.e., a mistransformed area, as shown in Fig. 5(a). The perspectively transformed images and the reconstructed central-view images are used to perform block matching to estimate disparity vectors and MVs, respectively. The SIF of a central-view image not transmitted can be reconstructed through weighted motion compensated prediction by the above disparity vectors and MVs, in which the latter are estimated from the reconstructed central-view images. This BMP process reconstructs the SIF, as shown in Fig.
5(b), where the disparity vectors and MVs are estimated between reconstructed interview images and between reconstructed central-view images, respectively. The COMPETE flowchart is shown in Fig. 6. One frame is partitioned into blocks, and a large block consists of several neighboring blocks centered at the current block. The block MVs in a large block are obtained by performing motion estimation (ME) between the two adjacent reconstructed central-view images for the co-located large block. If the estimated MVs are zero, the current block is a no-motion block and can be reconstructed by directly copying the co-located block from the previous image. Otherwise, the current block is a motion block, and the corresponding disparity blocks in the transformed side-view images and the MC blocks reached through the estimated MVs are combined with weights proportional to block fidelity to yield a more accurate compensated block. We take the ME process for a block by referencing the left- and central-view images as an example; the right-view one is carried out in the same way. The first-phase block disparity estimation is performed between the transformed left-view image and the central-view image, which yields the best matched block with a disparity vector. If the best matched block does not reside on the outlier, the second-phase block ME is performed, in which the search range is two blocks wide along the vertical and horizontal directions, centered at the co-located coordinate offset by the disparity vector. It yields one inner-path MV, and the second one can be obtained by the same procedure. The other two MVs are estimated from the right-view video through the same procedure. When performing MC, if any image block reached through an inner-path MV resides on the outlier, its weight is set to zero. The SIF block is then reconstructed as the weighted sum of the image blocks obtained through these MVs, in which the first term yields the weighted central-view contribution and the second the side-view contribution.
In general, the weight should be proportional to the normalized fidelity of the corresponding best matched block with respect to the co-located blocks in the central view. The weight for one MC block can be computed from the sum of absolute differences (SAD), whose reciprocal is used as the block fidelity. On the other hand, if all matched blocks reside on the outlier, there is no prediction result that satisfies the assumed scenario. Under this condition, only the reference MV estimated between the two reconstructed central-view images can be used to predict the SIF, and bidirectional MC is used to reconstruct the SIF block. To further yield the optimal weight for an MC block, the linear minimum mean squared error (LMMSE) estimator can be adopted. How to compute the LMMSE weights is described in the Appendix. Experiments showed that adopting LMMSE weights can improve the SIF PSNR by up to 0.1 dB for low complexity videos and 0.3 to 0.4 dB for medium-to-high complexity videos, compared to weights proportional to block fidelity as presented by Eq. (2). In our experiments, the COMPETE is operated under a WZF/KF frame ratio of one, while the fusion-based homography method uses its own ratio. The COMPETE can also be adapted to operate under other ratios. In the COMPETE, the first KF of each view needs to be transmitted to estimate the homography matrices, as shown in Fig. 7(a), and there are one MV and two disparity vectors that can be used to interpolate the SIF of a central-view image. To interpolate the SI of a side-view image, only one MV and one disparity vector can be referenced, as shown in Fig. 7(b). For the last central-view image, only two disparity vectors can be referenced to interpolate its SI, as shown in Fig. 7(c). When the WZF/KF ratio is larger than 1, learning-based approaches37 that apply an expectation maximization algorithm for unsupervised learning of MVs are required.
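The fidelity-weighted combination above can be sketched as follows: each motion-compensated candidate block gets a weight proportional to the reciprocal of its SAD against a co-located reference block, normalized to sum to one. The small epsilon guarding against division by zero is an implementation assumption.

```python
# Sketch of fidelity-weighted block prediction (weights proportional to 1/SAD).
def sad(a, b):
    """Sum of absolute differences between two blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def weighted_prediction(candidates, reference):
    # reciprocal SAD as fidelity; epsilon avoids division by zero
    fidelities = [1.0 / (sad(c, reference) + 1e-6) for c in candidates]
    total = sum(fidelities)
    weights = [f / total for f in fidelities]          # normalized weights
    n = len(reference)
    return [sum(w * c[i] for w, c in zip(weights, candidates)) for i in range(n)]

ref = [10, 10]
cands = [[10, 10], [30, 30]]   # first candidate matches far better
pred = weighted_prediction(cands, ref)
# pred is pulled almost entirely toward the high-fidelity candidate
```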
3. Multiview Distributed Video Coding Rate Control Algorithm

The internal signal processing flow of the MDVC (Fig. 1) is shown in Fig. 8. The encoder comprises both H.264 and WZ encoders; the left- and right-view images are encoded by the former to yield KF bitstreams. The central-view images are separated into odd and even image sequences. The odd images are encoded by H.264 Intra to provide the KF bitstream, and the even ones by the WZ encoder, with an appended cyclic redundancy check (CRC) checksum, to yield parity bits. For adaptive rate control, the RCPT28 code is adopted for channel coding, because it performs near the Shannon limit at low SNR while providing excellent throughput at high SNR.28 The WZ encoder determines whether to send more parity bits based on the feedback request (NAK) from the WZ decoder. The decoder comprises one H.264 decoder, one WZ decoder, and one interpolation/homography function module. The received bitstreams are decoded by the H.264 decoder to yield the reconstructed left-, right-, and central-view odd images. These are the inputs of the interpolation/homography module that reconstructs the SI, an interpolated central-view image, for the WZ decoder to reconstruct the WZF. The multiplexer combines the reconstructed odd and even images to yield the final central-view video.

3.1. Wyner–Ziv Coding

The WZ encoder in the MDVC system is shown in Fig. 9. The input image is divided into blocks, which are transformed to frequency-domain coefficients through the transform T and quantized through Q to yield the quantized coefficients. To reduce encoding complexity, the integer DCT is adopted for low complexity hardware implementation. In each block, the DC coefficient comprises most of the block signal energy and is allocated more bits than the higher frequency coefficients, the ACs. Coefficients in the block are partitioned into different bands.
Each coefficient band is uniformly quantized, with the number of quantization levels determined by the number of bits assigned to that band. The number of quantization levels for a DCT coefficient block38 is determined through an optimal bit allocation procedure on the coefficients. In practical implementation, the quantization stepsize of each coefficient is set with a loading factor for a given coefficient probability density function (PDF).39 After quantization, each coefficient is represented by its quantization index. For simple demonstration, the parity bit generating process for one image is provided. The image is decomposed into sixteen blocks on which the DCT is performed, and the numbers of bits representing the quantized indexes of the DCs and ACs are 4 and 3, respectively. The DCs and ACs of these sixteen DCT blocks are rearranged such that the same frequency coefficients are grouped together and queued in zigzag scan order, as shown in the upper image of Fig. 10(a). For turbo encoding, the regrouped DC blocks are subject to bit-plane extraction, as shown in Fig. 10(a), such that bits of the same significance are grouped together and transmitted in bit-plane order, from the MSB to the LSB. For the regrouped AC blocks, the transmission order is reversed, i.e., from the LSB to the MSB. The bitstream of these reordered bits is then used as the input to the CRC encoder, which appends a checksum and passes it to the turbo encoder. After interleaving, the turbo encoder yields two parity bitstreams. Both parity bitstreams are punctured with specific patterns of a given period to form sub-blocks queued in the transmission buffer, which are sent to the decoder upon request.
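The bit-plane extraction step above can be sketched as follows for 4-bit quantized DC indexes; the helper name and toy values are illustrative.

```python
# Sketch of bit-plane extraction: regroup quantized indexes so that all
# bits of the same significance form one plane, MSB plane first (the AC
# planes would be queued in the reverse, LSB-first, order).
def extract_bitplanes(indexes, nbits):
    planes = []
    for p in range(nbits - 1, -1, -1):           # MSB plane first
        planes.append([(v >> p) & 1 for v in indexes])
    return planes

dcs = [0b1010, 0b0111, 0b1100]                   # three 4-bit DC indexes
planes = extract_bitplanes(dcs, 4)
# planes[0] -> [1, 0, 1]  (the MSBs)
# planes[3] -> [0, 1, 0]  (the LSBs)
```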
The puncture pattern is designed to select parity bits according to the specified priority, as shown in Fig. 10(b). For turbo decoding, the systematic bits skipped at the encoder are replaced with the SI reconstructed at the decoder, which can be generated by different methods. The turbo decoder requests more parity bits when it cannot correctly recover the data. In general, when the SI confidence is high, fewer parity bits are requested and the WZF quality is improved. Detailed rate control steps are described in Sec. 3.2. To reconstruct the WZF from the received parity bit sub-blocks at the WZ decoder, shown in Fig. 11, the SI needs to be generated by the interpolation/homography module, as shown in Fig. 8. Before turbo decoding, the same T and Q processes are applied to the SI. To increase the SI confidence for turbo decoding, the distribution of the error between the reconstructed SIF and the original WZF is modeled as Laplacian. A transform-domain correlation noise model parameter updating procedure29 is applied to fit the coefficient error distribution of each block with the Laplacian model. Since the original image encoded as a WZF is not available at the decoder, the MCTI image interpolated from the neighboring KFs is used instead. After being processed by T and Q, the indexed signals are reordered, grouped, and extracted by bit-plane to provide the systematic bits for the turbo decoder. The turbo decoder performs the logarithmic maximum a posteriori algorithm, Log-MAP, with the help of the received parity bit sub-blocks and CRC checksum verification, under a certain confidence measurement40 to determine whether the decoding process has converged or more bits should be requested for the next iteration. After being decoded correctly, the bitstream is reversely processed by the combining bit-plane module to yield the quantized indexes, which are used as the input of the reconstruction module to refine the reconstruction.
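The refinement performed by the reconstruction module, under the Laplacian residual model, can be sketched numerically: the reconstructed value is the expectation of a Laplacian centered at the SI, truncated to the decoded quantization interval. The numerical integration below keeps the sketch simple; closed forms exist for the Laplacian, and the function name and parameters are assumptions.

```python
import math

# MMSE reconstruction sketch: E[X | X in [lo, hi], SI], with the residual
# X - SI modeled as Laplacian with decay parameter alpha.
def mmse_reconstruct(si, lo, hi, alpha, steps=10000):
    num = den = 0.0
    dx = (hi - lo) / steps
    for i in range(steps):
        x = lo + (i + 0.5) * dx                  # midpoint rule
        p = math.exp(-alpha * abs(x - si))       # unnormalized Laplacian pdf
        num += x * p * dx
        den += p * dx
    return num / den

# SI falls below the decoded interval: the reconstruction is pulled toward
# the lower boundary instead of blindly trusting the interval midpoint.
x_hat = mmse_reconstruct(si=2.0, lo=4.0, hi=8.0, alpha=1.0)
# x_hat lies between 4.0 and the midpoint 6.0
```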
The optimal reconstruction function that exploits the correlation between the original WZF and the SI14 is adopted, in which the distribution of the residual signal between the original WZF and the reconstructed SIF is assumed to be Laplacian, and it seeks the reconstructed samples that demonstrate MMSE. The optimal reconstruction value is the expectation of the source conditioned on the SI and the lower/upper boundaries of the quantization interval in which the coefficient resides; this expected value yields the MMSE estimate of the WZ source. This procedure prevents the reconstructed values from deviating too much from the original values due to low SI confidence. At the last stage, the refined coefficients are inversely transformed to yield the final reconstructed image.

3.2. Rate Control Mechanism

To improve the decoding efficiency, we propose to impose specific puncture patterns with transmission orders according to the signal distribution properties of the DCs and ACs, respectively. In the COMPETE framework, we propose to collect all same order DCs/ACs together, which are then zigzag scanned for turbo encoding. For block DCT-based video coding, the DC coefficient usually contains most of the block signal energy. Its MSBs contribute much more signal energy than its LSBs, so the former are assigned higher priority than the latter. As shown in Fig. 10(b), the system is designed to transmit the first MSBs of all DCs and then the second MSBs. The magnitudes of the ACs are much smaller and concentrated around zero. Since ACs may be positive or negative, taking the absolute value leads to a more skewed magnitude probability distribution. The sign bit of a quantized AC can be replaced by that of the quantized SIF at the decoder, under which the probability that an LSB is 0 is larger than that of an MSB, when represented with a fixed number of bits. In contrast to the DCs, the ACs are transmitted LSB first and then the second LSB41 to speed up turbo decoding, as shown in Fig. 10(c).
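The asymmetric transmission priority just described, DC planes MSB-first and AC planes LSB-first, can be sketched as a queue-building helper; the plane labels are illustrative, not part of any bitstream syntax.

```python
# Sketch of the proposed transmission priority: DC bit planes are queued
# MSB-first (they carry most of the signal energy), while AC bit planes
# are queued LSB-first (their magnitudes cluster near zero, so low planes
# are highly predictable and cheap to correct).
def transmission_order(n_dc_planes, n_ac_planes):
    dc = [f"DC-plane-{p}" for p in range(n_dc_planes - 1, -1, -1)]  # MSB..LSB
    ac = [f"AC-plane-{p}" for p in range(n_ac_planes)]              # LSB..MSB
    return dc + ac

order = transmission_order(4, 3)
# order -> ['DC-plane-3', 'DC-plane-2', 'DC-plane-1', 'DC-plane-0',
#           'AC-plane-0', 'AC-plane-1', 'AC-plane-2']
```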
This transmission strategy for DCs and ACs helps to correct the decoding errors of the systematic bits with the fewest requested parity bits. Experiments showed that this rate control strategy yields 55% to 59% fewer requested bits for the turbo decoder. The proposed rate control algorithm, developed based on the RCPT puncturing mechanism,28 is demonstrated in Fig. 12. In the COMPETE system, the RCPT code is formed from two recursive systematic convolutional constituent codes, and puncturing tables with different rates are generated; the unpunctured systematic part is not used because the systematic bits are discarded under the DVC framework. Figure 10 demonstrates part of the corresponding puncture table. When the first sub-block of parity bits is received, decoding is carried out based on the CRC alone.28 When those of the second sub-block are received, the first constituent code is decoded, and iterative turbo decoding starts after the third sub-block is received, with a preset maximum iteration number. When the decoded results converge, i.e., the CRC check yields an all-zero syndrome, or the number of iterations exceeds the maximum, the resultant bitstream is subjected to a second confirmation procedure. A larger maximum iteration number leads to heavy computation, so the tradeoff between decoding convergence and computation must be managed carefully; its value is determined from experiments on test videos of different complexities under different bit rates that can yield convergence. The confidence measurement follows the criterion ConfPr,40 defined over a decoded block of predefined length as the fraction of certain bits, where a bit is counted as uncertain when the decoding confidence derived from its absolute decoded likelihood ratio is not higher than 0.99. The decoding is successfully completed when both the CRC check and the confidence measure are satisfied. 
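The ConfPr criterion can be sketched as below; mapping the absolute log-likelihood ratio of each bit to a hard-decision probability and thresholding it at 0.99 is an assumption of this sketch, and the function name is illustrative.

```python
import numpy as np

def conf_pr(llrs, p_thresh=0.99):
    """Confidence measure for one decoded block (sketch of ConfPr): a bit
    is 'uncertain' when the confidence of its hard decision, derived from
    the absolute log-likelihood ratio, does not exceed p_thresh. ConfPr
    is the fraction of certain bits; decoding succeeds when the CRC
    passes and ConfPr meets a preset threshold.
    """
    llrs = np.asarray(llrs, dtype=float)
    p = 1.0 / (1.0 + np.exp(-np.abs(llrs)))   # confidence of the hard decision
    n_uncertain = int(np.sum(p <= p_thresh))
    return 1.0 - n_uncertain / llrs.size
```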
When the CRC passes but the confidence measure fails, more sub-block parity bits are requested through the ARQ mechanism for the next iteration, until all sub-block bits have been sent or the bitstream is decoded successfully. To improve the turbo decoding performance while requesting fewer parity bits, the correlation among coefficient bit-planes is exploited to estimate the a posteriori probability, which is used as the a priori probability for turbo decoding. The probability distribution of the difference between a SIF and the original image coded as a WZF is assumed to be Laplacian, parameterized by the variance of the residual signal between the WZF and the SIF.29 When decoding a bit-plane of the DCs, the already decoded more significant bit-planes and the reconstructed SI are jointly referenced to specify the probability. Figure 13 shows an example of estimating the probability40 of a quantized DC represented with four bits, from MSB to LSB: the probability integrated over the shaded interval corresponds to the current bit being 0, and the complementary case can be calculated in a similar way. The turbo decoder updates the a priori probability accordingly and performs Log-MAP decoding. Experiments verified that this probability estimation and updating method helps the decoder to request fewer parity bits and reduces the turbo decoding time.

4. Simulation Study

The COMPETE encoding performance is compared with other SIF reconstruction methods, such as MCTI, fusion-based homography (F-HOMO), MVME, and H-MVME, for evaluation. The H-MVME is an extension of MVME,26 in which estimated image blocks that reference the outlier region are obtained through MCTI. In F-HOMO, the SIFs reconstructed from interview and intraview images through DCVP42 and MCTI, respectively, are fused to yield the final SIF. The quality of the WZF reconstructed with its SIF generated by the above methods is compared with those from H.264 in inter, intra, and inter-no-motion modes. 
The multiview CIF videos Race1, Ballroom, Breakdancer, Exit, Ballet, and Vassar, provided by ISO/IEC,43 are used as test videos, whose frame rates are 30, 25, 15, 25, 15, and 25 fps, respectively. These videos present different scene complexities rated from high to low: Race1, Ballroom, and Breakdancer are classified as high complexity videos, Exit as medium complexity, and Ballet and Vassar as low complexity ones. Three consecutive views out of the six views of a multiview video are used to provide the left-, central-, and right-view videos. For H.264, the CABAC function is enabled and the GOP size is 12 for the inter and inter-no-motion modes. The ME search range for the former is set to 32, and zero motion is assigned for the latter. For the H.264 coder to yield comparable decoded quality for different videos, different quantization parameters (QPs) are used for videos of different complexities. In the MDVC codec, the side-view videos and the central-view odd frames are encoded with H.264/INTRA to provide KFs for the decoder to reconstruct the WZFs. The quality of the WZFs reconstructed with reference to the four SIF generation methods is compared by image PSNRs for evaluation.

4.1. Performance Analysis

To evaluate the performance of the proposed COMPETE, an error analysis based on reconstructed blocks is first carried out to investigate the signal processing behavior. Four SIF reconstruction methods, MCTI, F-HOMO, MVME, and H-MVME, are also implemented for comparison. The SI confidence, the quality of the reconstructed WZFs, and the time complexity of the different methods are compared and evaluated. The time complexity of SI generation and the encode/decode execution times are discussed in Sec. 4.2.

4.1.1. Error analysis

The error distributions of COMPETE and MVME are investigated to justify how the SI confidence can be improved. 
In the COMPETE algorithm, by performing intraview ME between central-view images, blocks are classified as motion or no-motion to eliminate unnecessary ME/MC operations. For no-motion blocks, the co-located block of the previous frame is used as the MC block with zero motion. For motion blocks, when the search range comprises regions belonging to the outlier, only intraview ME on central-view images is performed; otherwise, the regular weighted MVME process is carried out. Denote the numbers of no-motion, motion, and outlier blocks in one frame as Nn, Nm, and No, which can be normalized by the total number of blocks as rn, rm, and ro, respectively, i.e., rn + rm + ro = 1. In COMPETE, the MC interpolated frame is assembled from these three sets of blocks, and the variance of the block errors can be represented as the mixture of the per-category error variances weighted by rn, rm, and ro. For each image block, the specific ME procedure corresponding to its category, i.e., motion, no-motion, or outlier, is imposed. Table 1 shows the percentage of each block category for different videos, and Table 2 shows the mean absolute error of the block difference between the original image and its reconstructed SIF for the six test videos. As shown, the percentage of outlier blocks is very small, and their average reconstruction error by COMPETE is smaller than that of MVME. Both the estimated intraview MVs and the interview disparity vectors are utilized to improve the SI confidence, in which the four MVs through inner paths are utilized to perform intraview weighted MC for a central-view SIF. This SIF demonstrated higher confidence than that reconstructed through average MC in both MVME and H-MVME. As shown in Table 2, the average error of the reconstructed blocks of the proposed COMPETE is smaller than that of MVME. Table 1 shows that the percentage of no-motion blocks is the highest; these are mostly from the background region or static foreground objects. 
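The motion/no-motion classification can be sketched as a mean-absolute-difference test against the co-located block of the previous central-view frame; the threshold value and function name are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def classify_block(cur_blk, prev_blk, mad_thresh=2.0):
    """Classify a block as 'no_motion' or 'motion' by comparing the mean
    absolute difference with its co-located block in the previous
    central-view frame (the threshold is an assumed illustrative value).
    No-motion blocks skip ME entirely and are copied with zero motion.
    """
    mad = np.mean(np.abs(cur_blk.astype(np.int32) - prev_blk.astype(np.int32)))
    return "no_motion" if mad <= mad_thresh else "motion"

# Static background block: classified no-motion, so ME/MC is skipped.
prev = np.full((8, 8), 120, dtype=np.uint8)
cur = prev.copy()
cur2 = prev + 30   # a block covered by a moving object
```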
For no-motion blocks, the proposed COMPETE effectively eliminates the time consuming ME process and prevents the noisy MVs that result from the regular ME processes of other methods. For example, the MVME method, instead of identifying no-motion blocks and skipping the time-consuming ME process, treats all blocks as motion blocks but does not yield a more accurate estimation, as shown in Table 2. For motion blocks, MVME does not differentiate interview disparity vectors from intraview MVs, so its MC blocks are more degraded compared to those of COMPETE. As COMPETE compensates no-motion blocks with the co-located blocks of the previous decoded frame, the ME errors are decreased in addition to the time complexity being reduced. In total, the proposed COMPETE effectively yields higher SI confidence while reducing the time complexity compared to MVME.

Table 1. No-motion, motion, and outlier block distribution at QP=26.
Table 2. The comparison of estimation errors.
4.1.2. Side information confidence

The SI confidence in PSNR achieved by MCTI, F-HOMO, MVME, COMPETE with the direct linear transform (DLT) homography matrix generation method, and COMPETE on all test videos is shown in Fig. 14. As shown, the MCTI performance is severely degraded for the high motion videos (Race1, Ballroom, and Breakdancer), since it assumes linear motion and interpolates frames only along the temporal dimension. For Race1, the SIF by COMPETE is 6.2 to 7.9 dB higher in PSNR than that of MCTI, because the sequence is a panning shot of moving objects such that MCTI cannot find the correct MVs to reconstruct the SI. F-HOMO adopts pixel-based fusion, which leads to image discontinuity artifacts when fusing the disparity-synthesized and temporally interpolated (MCTI) images. The H-MVME outperforms MVME26 with 0.5 to 3 dB higher PSNR for both high and low complexity videos. MVME performs ME from both interview and intraview KFs, which may lead to false/trivial ME and degraded quality, in addition to being time consuming; H-MVME improves on MVME by eliminating the interview disparity. The proposed COMPETE estimates MVs with reference to the perspectively transformed images and detects no-motion blocks to eliminate regular ME operations. SIFT followed by RANSAC helps to yield more stable matching point pairs, as compared to COMPETE with DLT alone, as shown in Fig. 14. In comparison, the proposed COMPETE not only achieves the same reconstructed image quality as H-MVME but also decreases the computation complexity. For Ballet, the SIF by COMPETE is 0.1 to 2.3 dB higher in PSNR than that of H-MVME, because the disparity problem of interview ME is solved by block prediction through the perspective transform. In comparison, COMPETE effectively reduces the computational complexity and well utilizes the interview and temporal correlations to eliminate disparity block matching noise. 
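The DLT step mentioned above can be sketched in numpy as follows; the SIFT matching and RANSAC pruning that precede it in the pipeline are omitted, and the point pairs here are synthetic.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform (DLT): estimate the 3x3 homography H that
    maps src -> dst from >= 4 point correspondences. Each pair yields two
    linear constraints on the 9 entries of H; the solution is the right
    singular vector of the stacked system with the smallest singular value.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so that H[2, 2] == 1

# A pure translation (+5, -2) is recovered exactly from four corners.
src = [(0, 0), (10, 0), (10, 10), (0, 10)]
dst = [(x + 5, y - 2) for x, y in src]
H = dlt_homography(src, dst)
```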
Experiments also justified that the proposed COMPETE yields the best SI confidence compared to the others.

4.1.3. Objective performance evaluation

The PSNRs of the WZFs coded by the five methods under the MDVC framework and of the images reconstructed by H.264 with intra, inter, and inter-no-motion modes are calculated for comparison. The rate-distortion performance is similar to that of the SI confidence. For the high-complexity videos, e.g., Race1, Ballroom, and Breakdancer, the SI confidence in PSNR is comparable between COMPETE and H-MVME, both of which are 0.9 to 7.8 dB higher than MCTI and F-HOMO, as shown in Fig. 14. The WZFs reconstructed with COMPETE are 0.8 to 2.9 dB higher in PSNR than those of MCTI and F-HOMO, as shown in Figs. 15(a)–15(c). For high-complexity videos, both MCTI and F-HOMO cannot estimate accurate MVs to compensate the reconstructed SIFs, which leads to more degraded WZFs. Both COMPETE and H-MVME yield higher SI confidence and hence better reconstructed WZF quality. The COMPETE yielded 0.4 to 1 dB higher PSNR than H.264/INTRA for Breakdancer, 1 to 1.5 dB higher for Ballroom, and 0 to 0.5 dB higher for Race1. The H.264 intra and inter-no-motion modes cannot encode Race1 well, because the camera was tracking a moving object. For the medium-complexity video, Exit, the SI confidence in PSNR reconstructed by COMPETE is 2.4 to 3.9 dB higher than that of MCTI, as shown in Fig. 14. The average PSNRs of the reconstructed WZFs are 3.5 and 2.5 dB higher than those reconstructed from H.264 intra and MCTI, respectively, as shown in Fig. 15(d). For the low complexity videos, Ballet and Vassar, which demonstrate more static regions, the interpolation and fusion processes perform efficiently for all methods, resulting in smaller differences in PSNR performance. The COMPETE yielded 0.8 to 2 dB higher PSNR than MCTI for the reconstructed WZFs, and 1.5 to 2.2 dB higher than H.264/INTRA, as shown in Figs. 15(e) and 15(f). 
In addition, although the MVME-based methods,26 i.e., MVME and H-MVME, demonstrate PSNR performances comparable to COMPETE, their time complexity is high. Experiments showed that COMPETE outperforms the others in SIF and WZF quality, in that it prevents blocks in static regions from being corrupted by noise during the interpolation and block matching processes. Note that the KF quality setting impacts the SI confidence, and the KF quality depends on the QP selection. To justify COMPETE's capability in improving the MDVC codec performance, the average image PSNR of KFs and WZFs under a fixed bit budget is provided for comparison. As shown in Fig. 16, COMPETE outperforms the others by 0.4 to 4 dB in PSNR under different bitrates for both high and low complexity videos, Race1 and Vassar, respectively. Experiments revealed that high confidence SI is much more important than the rate control method in DVC coding: (1) when the SI confidence is low, the decoding confidence measure, ConfPr in Eq. (5), does not satisfy the convergence condition; under this condition, whether the rate control procedure is carried out or the decoder simply requests more parity bits, ConfPr can hardly converge. (2) When the SI confidence is high enough and the rate control procedure transmits high priority parity bits first, the number of decoding iterations is reduced and the convergence criterion is reached quickly. One practical turbo decoder example44 shows that when the KFs are severely attacked by channel noise, which leads to low confidence SI, the PSNRs of the reconstructed WZFs degrade rapidly because the turbo decoder cannot recover a WZF from a severely degraded SIF. The numbers of average requested bits and the bit rate savings under different SIF reconstruction methods are provided and compared in Tables 3 and 4, respectively. 
As shown in Table 3, the proposed COMPETE requested the fewest parity bits among the four methods because it yields the highest SI confidence. Table 4 shows that the proposed rate control mechanism enables the four SI reconstruction methods to largely reduce the requested bit rates.

Table 3. The average requested bit rate of different SIF generation methods (15 fps QCIF).
Table 4. The turbo decoded bit rate comparisons with and without the rate control mechanism (15 fps QCIF).
4.2. Time Complexity Analysis

The time complexities of the proposed COMPETE and the other SI reconstruction methods are analyzed and discussed. First, the number of arithmetic operations (addition/subtraction and multiplication/division) required to reconstruct the SI is calculated for the time complexity analysis. The practical execution time is also measured to verify the analysis. Denote the image width and height as W and H, respectively, and the block size and search range as B and S, respectively.

4.2.1. Motion compensated temporal interpolation

The MCTI performs intraview ME between the two neighboring KFs and then performs motion compensated prediction to interpolate the SI for the WZFs. It performs subtraction and addition operations to yield the sum of absolute differences: for one B×B block, it needs B^2 subtractions and B^2 - 1 additions to calculate the block error. As the search window contains (2S+1)^2 candidate positions, about (2S+1)^2 (2B^2 - 1) operations are required to finish the ME for one block, and the total for one image with WH/B^2 blocks follows accordingly. The time complexity of MCTI is denoted as T_MCTI.

4.2.2. Fusion-based homography

The fusion-based homography was implemented based on the Fusion 1 algorithm in Ref. 15. After performing the perspective transformation, the synthesized perspectively transformed images and the temporally interpolated image are considered as candidates for the fusion-based central-view image. For each pixel of the SIF to be reconstructed, it seeks the candidate that yields the minimum distance to both the previous and the next central-view image pixel values. Estimation of the initial homography matrix can be performed off-line, so its time complexity can be ignored. The perspective transformation needs 15 MUL/ADD operations per pixel to yield the two reference central-view images; the temporal interpolation costs the MCTI operations derived above, and additional per-pixel comparisons are needed to find the pixel that yields the minimum pixel value difference. 
The time complexity of this fusion-based homography method is denoted as T_F-HOMO.

4.2.3. Hybrid multiview motion estimation

The H-MVME is an improved MVME.26 In MVME, four MVs through inner paths are obtained and averaged to yield the motion compensated prediction image. The MVME algorithm is designed under the assumption that the optical axes of all cameras are orthogonal to the motion. For general multiview video, a homography transformation is required, and there exists an outlier region where MVME is not applicable. H-MVME therefore performs bidirectional temporal MC when the search range resides in the outlier region. The required operations comprise performing the four inner-path MEs, calculating the weights (three ADDs and eight DIVs), and calculating the average. Its time complexity, denoted as T_H-MVME, is smaller than that of MVME.26

4.2.4. COMPETE

The design target of the proposed COMPETE is to keep high quality reconstruction while reducing the computation complexity. First, it performs perspective transformations from the side views to the central view for the three left- and three right-view images. Then, it performs block ME and checks whether each block is a motion block or not. For a no-motion block, a direct copy of the co-located block of the previous image is adopted, and no further operation is required. For motion blocks, the search range for finding the disparity vectors can be minimized because the reference frames are perspectively transformed from the side-view images. The COMPETE, like H-MVME, performs the four inner-path MEs twice: the first pass estimates the disparity vectors, and the second performs ME after disparity compensation. Finally, by including all the required operations for computing the weights and the average, the total number of operations is obtained and denoted as T_COMPETE. 
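The full-search SAD block matching underlying these operation counts can be sketched as below; each of the (2S+1)^2 candidates costs about 2B^2 operations, which is the cost model used above. Names and the test setup are illustrative.

```python
import numpy as np

def full_search_sad(cur, ref, by, bx, B, S):
    """Exhaustive block ME: evaluate the SAD of the B x B block at
    (by, bx) in `cur` against every candidate in a (2S+1)^2 window of
    `ref`, and return the best (dy, dx) motion vector. The nested loops
    make the O((2S+1)^2 * B^2) per-block cost of Sec. 4.2 explicit.
    """
    blk = cur[by:by + B, bx:bx + B].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-S, S + 1):
        for dx in range(-S, S + 1):
            y0, x0 = by + dy, bx + dx
            if y0 < 0 or x0 < 0 or y0 + B > ref.shape[0] or x0 + B > ref.shape[1]:
                continue                     # candidate falls outside the frame
            cand = ref[y0:y0 + B, x0:x0 + B].astype(np.int32)
            sad = np.abs(blk - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

# Synthetic check: `cur` is `ref` shifted by (2, 1), so the true MV is (2, 1).
rng = np.random.default_rng(0)
ref = rng.integers(0, 255, (32, 32))
cur = np.roll(ref, (-2, -1), axis=(0, 1))
mv = full_search_sad(cur, ref, 8, 8, 8, 4)
```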
The above time complexity analysis is consistent with the experiments: the execution time of COMPETE is only about half that of H-MVME while achieving the same SI confidence, and only about four times that of MCTI.

4.2.5. Practical execution time evaluation

The above time complexity analysis of the different SI reconstruction methods is verified by the practical execution times. All implementations are executed on the same computer for fairness. The execution times of the MDVC light encoder and the H.264 encoder are first investigated. Table 5 lists the average encoding time for one frame by MDVC, H.264 intra, H.264 inter-no-motion, and H.264 inter, respectively. As shown, the MDVC light encoder spends about 5 to 15 times less time than the others, which justifies the above analysis. Table 6 lists the average execution time for reconstructing one SIF by MCTI, F-HOMO, H-MVME, and COMPETE, respectively. As shown, the average execution time for reconstructing one SIF by H-MVME is about eight times that of MCTI. For COMPETE, this average execution time is largely reduced for lower complexity videos. As the probability of processing motion blocks in high complexity videos is high, the time reduction there is limited, being 1.29 to 2.56 times less than that of H-MVME. Table 7 lists the average turbo decoding time for the different SI reconstruction methods, evaluated relative to the MCTI execution time for simplicity. Experiments showed that the decoding time is reduced for higher SI confidence, which justifies that the proposed COMPETE provides better SI than the others.

Table 5. The average time to encode one image in MDVC and in H.264 intra, inter-no-motion, and inter coding modes (msec/frame); both QCIF and CIF results are provided for comparison.
Table 6. The average time to reconstruct one SIF (msec/frame).
Table 7. The average time saving of turbo decoding with different SI reconstruction methods, as compared to MCTI.
4.3. Subjective Performance Evaluation

The subjective performance of the different methods on the test videos is presented in this section. The QP control parameter of H.264 is set to 26.

4.3.1. Reconstructed side information frames

The SIFs reconstructed by MCTI and F-HOMO demonstrate severe block artifacts, which are smoothed by the proposed COMPETE and the modified H-MVME. However, the latter suffers block noise in low complexity videos because its regular interpolation and block matching introduce static block noise. The proposed COMPETE effectively eliminates this block noise through weighted compensation and prediction.

4.3.2. Reconstructed Wyner–Ziv frames

The SI confidence affects the reconstructed WZF quality. For the WZFs reconstructed by MCTI and F-HOMO, many image blocks cannot be well recovered from the low confidence SI. In comparison, COMPETE and H-MVME yield higher SI confidence and hence higher WZF quality. Although COMPETE and H-MVME demonstrate comparable PSNRs, the former consumes fewer computations. The resultant images are shown in Fig. 17. The reconstructed videos demonstrate that moving objects (cars and persons) are blurred in the MCTI- and F-HOMO-based WZFs, while both COMPETE and H-MVME effectively eliminate this artifact for slow-motion content, e.g., the legs in Breakdancer.

4.4. Practical Applications

The WZ decoder combines the SI and the received parity bits to recover the original symbols. Additional parity bits are requested if the original symbols cannot be reliably decoded, and this request-and-decode process is repeated until an acceptable symbol error probability is reached.2 The rate control performed by the decoder reduces the encoder computational loading. This feedback also enables the decoder to flexibly control the SI generation from simple to sophisticated approaches, which helps to adapt to different encoder applications. 
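The request-and-decode loop can be sketched as follows; all names are hypothetical illustrations of the control flow, not the paper's API, and the decode check stands in for the CRC-plus-ConfPr test.

```python
def decode_with_arq(subblocks, try_decode, max_requests):
    """Decoder-driven request-and-decode loop (sketch): parity sub-blocks
    are accumulated one request at a time until `try_decode` (standing in
    for the CRC + confidence check) succeeds or the budget is exhausted.
    Returns the decoded result (or None) and the number of sub-blocks used.
    """
    received = []
    for sb in subblocks[:max_requests]:
        received.append(sb)           # one ARQ round-trip per sub-block
        ok, result = try_decode(received)
        if ok:
            return result, len(received)
    return None, len(received)

# Toy check: decoding succeeds once three sub-blocks are available.
def fake_decode(received):
    ok = len(received) >= 3
    return ok, ("frame" if ok else None)

result, n_used = decode_with_arq(["p1", "p2", "p3", "p4"], fake_decode, 4)
```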
However, the feedback channel used in this interactive decoding procedure may also hinder practical applications that require independent encoding and decoding. Instead of adopting this decode-and-request procedure, the decoder could be implemented with a correlation estimation algorithm, in which the rates of previously reconstructed frames are used to predict the required rates sent to the encoder. Feedback-free45 and unidirectional DVC46 schemes have been proposed to make the decoder operations independent of those of the encoder.

5. Conclusions

For an MVC that adopts DVC coding, MDVC, we proposed to utilize the interview video correlations and to exploit the bit value probability distribution of the transform coefficients under the block-DCT video codec framework, improving the SIF confidence and the accuracy of the decoded bits while speeding up the decoder rate control process. The contributions of this paper are as follows. (1) For specific multiview video applications, such as wireless video sensor and wireless video surveillance networks, the proposed MDVC utilizes the advantages of the DVC and multiview video frameworks to enable efficient and low complexity video encoding. Simulations verified that the MDVC can reduce the encoding complexity to at least five times less than that of H.264/INTRA while enhancing the quality of the reconstructed WZFs. (2) To improve the MDVC decoding performance, a multiview SI generation algorithm, COMPETE, was proposed to improve the quality of the reconstructed SIFs and WZFs. Both the temporal correlation among intraview images and the disparity correlations among interview images are well utilized to enhance the WZF reconstruction. Simulation results showed that the PSNRs of the WZFs reconstructed by COMPETE are 0.5 to 3.8 dB higher than those by MCTI when encoding low to high complexity videos. 
(3) To improve the MDVC rate control performance, we exploit the probability distribution of the transform coefficient bits and reorder the transmission priorities of DCs and ACs, such that the turbo decoder requests the fewest bits to decode a WZF. Simulations demonstrate that the PSNRs of the decoded WZFs are 0.2 to 3.5 dB higher than those encoded with H.264/INTRA under the same bit rates. The COMPETE also outperformed H-MVME with 0.15 to 2.93 dB higher image PSNRs, while H-MVME outperforms MVME with 0.5 to 1 dB higher PSNR. Besides, COMPETE effectively reduces the computation complexity, which is 1.29 to 2.56 times less than the other SI reconstruction methods on average. Some recent research on video coding focuses on free-view video codecs and transmission. The proposed SI reconstruction method, COMPETE, under the MDVC framework can be extended to enhance the performance of free-view video codecs that have to handle dynamic and mobile encoders and view reconstruction, which we consider as future research. The COMPETE can also be carried out with a pixel-level disparity model. In addition, how to embed a small amount of information at the encoder22 to improve the decoding efficiency, together with the pixel-level disparity model, is also considered as future research.

Appendix: Linear Minimum Mean Squared Error

The LMMSE predictor is carried out to compute the optimal fusion weights for an MC block with four observations. Denoting the original WZF block as the target and the four motion compensated candidate blocks in the reconstructed SIF as the observations, the predictor minimizes the mean squared prediction error; setting the first derivative of the error with respect to the weights to zero yields the normal equations, from which the optimal weights are calculated. This procedure could be carried out entirely at the encoder for higher accuracy, but that conflicts with the design target of light encoding. 
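The normal-equation solution of the appendix can be sketched as below; the (4, N) flattened-block layout and the function names are illustrative, and in the actual decoder the MCTI-interpolated block stands in for the unavailable WZF block as the target.

```python
import numpy as np

def lmmse_weights(obs, target):
    """Solve the normal equations R w = p for the LMMSE fusion weights of
    the four motion-compensated candidate blocks (sketch of the appendix):
    `obs` is a (4, N) array of flattened candidate blocks and `target` is
    the block the weighted prediction should match.
    """
    obs = np.asarray(obs, dtype=float)
    target = np.asarray(target, dtype=float).ravel()
    R = obs @ obs.T              # correlation among the four candidates
    p = obs @ target             # cross-correlation with the target block
    return np.linalg.solve(R, p)

def lmmse_predict(obs, w):
    """Weighted fusion of the candidate blocks with the LMMSE weights."""
    return w @ np.asarray(obs, dtype=float)

# Synthetic check: when the target is an exact linear combination of the
# candidates, the solver recovers the combination weights.
rng = np.random.default_rng(1)
obs = rng.standard_normal((4, 64))
w_true = np.array([0.4, 0.3, 0.2, 0.1])
target = w_true @ obs
w_hat = lmmse_weights(obs, target)
```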
For practical applications, since different videos demonstrate different MVs and the original WZF is not available at the decoder, the target can be replaced by the MCTI frames, which are interpolated from the decoded KFs at the decoder. The LMMSE predictor in Eq. (13) is utilized in the MC to yield optimal weights for the individual blocks reached by each MV, instead of simply assigning the heaviest weight to the block with the minimum SAD. Since only lossy reconstructed KFs are available at the decoder, the block with the minimum-SAD MV does not always promise the best matched block; when the KF compression ratios differ, the MV prediction results also differ and become unstable. This optimally weighted MC effectively exploits the interview disparity correlation by assigning different weights to blocks with different MEs, which prevents the block-based full search from trivial/unstable matching and increases the prediction accuracy. Experiments showed that assigning weights obtained from the LMMSE estimator can improve the SIF PSNR by up to 0.1 dB for low complexity video and by 0.3 to 0.4 dB for medium-to-high complexity video, as compared to weights proportional to the block fidelity [Eq. (2)]. The PSNR improvement depends on the accuracy of the four MVs, which degrades when encoding higher complexity videos; under this condition, the differences among the four MVs are enlarged, and the LMMSE estimator helps to yield stable weights for fusing blocks with different MEs. For low complexity videos, both the LMMSE estimator and the normalized fidelity-based weighting strategy demonstrated comparable performances.

Acknowledgments

This work is partially supported by the Taiwan Ministry of Science and Technology under Grant No. MOST 105-2221-E-011-116 and the Taiwan Building Technology Center under Grant No. IBRC 105H451709.

References

A. Vetro, T. Wiegand and G. J. Sullivan,
“Overview of the stereo and multiview video coding extensions of the H.264/MPEG-4 AVC standard,” Proc. IEEE, 99(4), 626–642 (2011). http://dx.doi.org/10.1109/JPROC.2010.2098830 Google Scholar
B. Girod, A. M. Aaron, S. Rane and D. Rebollo-Monedero, “Distributed video coding,” Proc. IEEE, 93(1), 71–83 (2005). http://dx.doi.org/10.1109/JPROC.2004.839619 Google Scholar
Z. Xiong, A. D. Liveris and S. Cheng, “Distributed source coding for sensor networks,” IEEE Signal Process. Mag., 21(5), 80–94 (2004). http://dx.doi.org/10.1109/MSP.2004.1328091 ISPRE6 1053-5888 Google Scholar
M. Flierl and B. Girod, “Coding of multi-view image sequences with video sensors,” in Int. Conf. Image Processing, 609–612 (2006). Google Scholar
X. Guo and Y. Lu, “Distributed multiview video coding,” Proc. SPIE, 6077, 60770T (2006). http://dx.doi.org/10.1117/12.642989 Google Scholar
D. Slepian and J. Wolf, “Noiseless coding of correlated information sources,” IEEE Trans. Inf. Theory, 19(4), 471–480 (1973). http://dx.doi.org/10.1109/TIT.1973.1055037 IETTAW 0018-9448 Google Scholar
A. Wyner and J. Ziv, “The rate-distortion function for source coding with side information at the decoder,” IEEE Trans. Inf. Theory, 22(1), 1–10 (1976). http://dx.doi.org/10.1109/TIT.1976.1055508 IETTAW 0018-9448 Google Scholar
ISO, “Information technology-coding of audio-visual objects-part 10: advanced video coding,” (2004). https://www.cmlab.csie.ntu.edu.tw/~cathyp/eBooks/14496_MPEG4/iso14496-10.pdf Google Scholar
A. Aaron, S. Rane, R. Zhang and B. Girod, “Wyner-Ziv coding for video: applications to compression and error resilience,” in Proc. IEEE Data Compression Conf., 93–102 (2003). http://dx.doi.org/10.1109/DCC.2003.1194000 Google Scholar
C. Z. Y. Cao, S. Gao and G. Qiu, “Towards practical distributed video coding for energy-constrained networks,” Chin. J. Electron., 25(1), 121–130 (2016). http://dx.doi.org/10.1049/cje.2016.01.019 CHJEEW 1022-4653 Google Scholar
C. Yeo and K. Ramchandran, “Robust distributed multiview video compression for wireless camera networks,” IEEE Trans. Image Process., 19(4), 995–1008 (2010). http://dx.doi.org/10.1109/TIP.2009.2036715 Google Scholar
M. Ouaret, F. Dufaux and T. Ebrahimi, “Iterative multiview side information for enhanced reconstruction in distributed video coding,” EURASIP J. Image Video Process., 2009, 591915 (2009). http://dx.doi.org/10.1155/2009/591915 Google Scholar
E. A. X. Artigas and L. Torres, “Side information generation for multiview distributed video coding using a fusion approach,” in Proc. of Nordic Signal Processing Symp., 250–253 (2006). Google Scholar
D. Kubasov, J. Nayak and C. Guillemot, “Optimal reconstruction in Wyner-Ziv video coding with multiple side information,” in Proc. Multimedia Signal Processing (MMSP) Workshop, 183–186 (2007). http://dx.doi.org/10.1109/MMSP.2007.4412848 Google Scholar
M. Ouaret, F. Dufaux and T. Ebrahimi, “Fusion-based multiview distributed video coding,” in Proc. of ACM Video Surveillance and Sensor Networks, 139–144 (2006). Google Scholar
Y. W. H. Yin, M. Sun and Y. Liu, “Fusion side information based on feature and motion extraction for distributed multiview video coding,” in Visual Communications and Image Processing Conf., 414–417 (2014). Google Scholar
M. C. T. Maugey, W. Miled and B. Pesquet-Popescu, “Fusion schemes for multiview distributed video coding,” in Signal Processing Conf., 559–563 (2009). Google Scholar
F. Dufaux, “Support vector machine based fusion for multi-view distributed video coding,” in Int. Conf. Digital Signal Processing (DSP), 1–7 (2011). http://dx.doi.org/10.1109/ICDSP.2011.6005004 Google Scholar
S. Shimizu et al., “Improved view interpolation for side information in multiview distributed video coding,” in Int. Conf. on Distributed Smart Cameras, 1–8 (2009). Google Scholar
G. Petrazzuoli et al., “Novel solutions for side information generation and fusion in multiview DVC,” EURASIP J. Adv. Signal Process., 2013, 154 (2013). Google Scholar
S. Shimizu and H. Kimata, "View synthesis motion estimation for multiview distributed video coding," in Proc. European Signal Processing Conf., 2057–2061 (2010).
M. Makar et al., "Quality-controlled view interpolation for multiview video," in Proc. Int. Conf. Image Processing, 1805–1808 (2011).
F. Pereira, J. Ascenso and C. Brites, "Studying the GOP size impact on the performance of a feedback channel-based Wyner-Ziv video codec," in Advances in Image and Video Technology, Lecture Notes in Computer Science, 801–815 (2007). http://dx.doi.org/10.1007/978-3-540-77129-6_68
D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vision, 60(2), 91–110 (2004). http://dx.doi.org/10.1023/B:VISI.0000029664.99615.94
M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, 24(6), 381–395 (1981). http://dx.doi.org/10.1145/358669.358692
X. Artigas, E. Angeli and L. Torres, "A comparison of different side information generation methods for multi-view distributed video coding," in Proc. SIGMAP, 450–455 (2007).
A. S. Barbulescu and S. S. Pietrobon, "Rate compatible turbo codes," Electron. Lett., 31, 535–536 (1995). http://dx.doi.org/10.1049/el:19950406
D. N. Rowitch and L. B. Milstein, "On the performance of hybrid FEC/ARQ systems using rate compatible punctured turbo (RCPT) codes," IEEE Trans. Commun., 48(6), 948–959 (2000). http://dx.doi.org/10.1109/26.848555
C. Brites and F. Pereira, "Correlation noise modeling for efficient pixel and transform domain Wyner-Ziv video coding," IEEE Trans. Circuits Syst. Video Technol., 18(9), 1177–1190 (2008). http://dx.doi.org/10.1109/TCSVT.2008.924107
M. Salmistraro et al., "Joint disparity and motion estimation using optical flow for multiview distributed video coding," in Proc. European Signal Processing Conf., 286–290 (2014).
N. Deligiannis et al., "Side-information-dependent correlation channel estimation in hash-based distributed video coding," IEEE Trans. Image Process., 21(4), 1934–1949 (2012). http://dx.doi.org/10.1109/TIP.2011.2181400
P. Márquez-Neila et al., "Improving RANSAC for fast landmark recognition," in Proc. Computer Vision and Pattern Recognition Workshop, 1–8 (2008).
K. Mikolajczyk and C. Schmid, "A performance evaluation of local descriptors," IEEE Trans. Pattern Anal. Mach. Intell., 27(10), 1615–1630 (2005). http://dx.doi.org/10.1109/TPAMI.2005.188
R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, Cambridge, England (2004).
J. Ascenso, C. Brites and F. Pereira, "Content adaptive Wyner-Ziv video coding driven by motion activity," in Proc. IEEE Int. Conf. Image Processing, 605–608 (2006). http://dx.doi.org/10.1109/ICIP.2006.312408
P. H. S. Torr and A. Zisserman, "MLESAC: a new robust estimator with application to estimating image geometry," Comput. Vision Image Understanding, 78(1), 138–156 (2000). http://dx.doi.org/10.1006/cviu.1999.0832
D. Varodayan, D. Chen, M. Flierl and B. Girod, "Wyner-Ziv coding of video with unsupervised motion vector learning," EURASIP Signal Process. Image Commun., 23(5), 369–378 (2008). http://dx.doi.org/10.1016/j.image.2008.04.009
A. Aaron, S. Rane, E. Setton and B. Girod, "Transform-domain Wyner-Ziv codec for video," Proc. SPIE, 5308, 520–528 (2004). http://dx.doi.org/10.1117/12.527204
K. Sayood, Introduction to Data Compression, Morgan Kaufmann (2012).
D. Kubasov, K. Lajnef and C. Guillemot, "A hybrid encoder/decoder rate control for Wyner-Ziv video coding with a feedback channel," in Proc. IEEE Workshop on Multimedia Signal Processing, 251–254 (2007). http://dx.doi.org/10.1109/MMSP.2007.4412865
Y. Vatis, S. Klomp and J. Ostermann, "Inverse bit plane decoding order for turbo code based distributed video coding," in Proc. IEEE Int. Conf. Image Processing, 1–4 (2007). http://dx.doi.org/10.1109/ICIP.2007.4379077
M. Ouaret, F. Dufaux and T. Ebrahimi, "Multiview distributed video coding with encoder driven fusion," in Proc. European Signal Processing Conf. (2007).
ISO/IEC JTC1/SC29/WG11, "Multiview video test sequences from MERL," (2005). ftp://ftp.merl.com/pub/avetro/mvc-testseq/
J. J. Chen et al., "A multiple description video codec with adaptive residual distributed coding," IEEE Trans. Circuits Syst. Video Technol., 22(5), 754–768 (2012). http://dx.doi.org/10.1109/TCSVT.2011.2179459
J. L. Martinez et al., "Feedback free DVC architecture using machine learning," in Proc. IEEE Int. Conf. Image Processing, 1140–1143 (2008). http://dx.doi.org/10.1109/ICIP.2008.4711961
M. B. Badem, W. A. C. Fernando and A. M. Kondoz, "Unidirectional distributed video coding using dynamic parity allocation and improved reconstruction," in Proc. Int. Conf. Information and Automation for Sustainability, 335–340 (2010).
Biography

Shih-Chieh Lee received his PhD in electrical engineering from the National Taiwan University of Science and Technology in 2013. He is currently working at Nokia Networks as a network planning and optimization engineer. His research interests include image/video processing and related topics in multimedia communications.

Jiann-Jone Chen received his PhD in electronic engineering from the National Chiao-Tung University in 1997. He was a researcher with the Advanced Technology Center, Information and Communications Research Laboratories, Industrial Technology Research Institute (ITRI), Hsinchu. He is currently an associate professor in the Electrical Engineering Department of the National Taiwan University of Science and Technology. His research interests include image/video processing, cloud video processing/streaming, image retrieval, and several topics in multimedia communications.

Yao-Hong Tsai received his PhD in information management from the National Taiwan University of Science and Technology (NTUST), Taipei, Taiwan, in 1999. He was a researcher with the Advanced Technology Center, Information and Communications Research Laboratories, ITRI, Hsinchu. He is currently an associate professor with the Department of Information Management, Hsuan Chuang University, Hsinchu. His current research interests include image processing, pattern recognition, and computer vision.