CN103763564B - Depth map encoding method based on lossless edge compression - Google Patents

Depth map encoding method based on lossless edge compression

Info

Publication number
CN103763564B
CN103763564B CN201410008702.4A CN201410008702A CN103763564B CN 103763564 B CN103763564 B CN 103763564B CN 201410008702 A CN201410008702 A CN 201410008702A CN 103763564 B CN103763564 B CN 103763564B
Authority
CN
China
Prior art keywords
edge
pixel
seed
foreground
residual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410008702.4A
Other languages
Chinese (zh)
Other versions
CN103763564A (en)
Inventor
王安红
赵利军
武迎春
武晓嘉
张�雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Science and Technology
Original Assignee
Taiyuan University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Science and Technology filed Critical Taiyuan University of Science and Technology
Priority to CN201410008702.4A priority Critical patent/CN103763564B/en
Publication of CN103763564A publication Critical patent/CN103763564A/en
Application granted granted Critical
Publication of CN103763564B publication Critical patent/CN103763564B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A depth map encoding method based on lossless edge compression, belonging to the field of 3D video coding, characterized by comprising the following steps: threshold-based edge detection; separate chain coding of the foreground and background edges; forward-difference predictive coding of the edge pixel values; down-sampling; forward-difference predictive coding of the seed image; arithmetic coding of the residual sequences and chain-code sequences; transmission of the binary bitstream; arithmetic decoding; chain-code decoding of the foreground and background edges; forward-predictive difference decoding; restoration of the seed image; sparse representation of the seed image; and reconstruction of the image using either a partial-differential-equation reconstruction method or a natural-neighbor interpolation reconstruction method. Its advantage is that it effectively exploits the property that a depth map consists of smooth regions separated by sharp edges; it can significantly improve depth-map coding performance and at the same time improves the rendering quality of virtual views.

Description

Depth map encoding method based on lossless edge compression
Technical field
The invention belongs to the technical field of 3D video coding, and specifically relates to a depth map encoding method based on lossless edge compression.
Background technology
With the development of consumer electronics, 3D video (three-dimensional video, 3DV) has become an indispensable part of everyday entertainment. In general, 3DV has two kinds of application: 3D television (3D Television, 3DTV) and free-viewpoint video (Free Viewpoint Video, FVV). Traditional 3D video captures a scene with cameras at two different positions and then presents each camera's video to one eye; since the left and right eyes see views with parallax, a sense of depth arises in the brain. However, on the one hand, a traditional 3D system has only two viewpoints, and the baseline between the two cameras is often much larger than the human inter-pupil distance, so the stereo disparity between the two views is very large and causes visual fatigue and discomfort. On the other hand, more and more home 3D users want to watch 3D video from arbitrary angles, which would require capturing and transmitting the video of 20 or even up to 50 different viewpoints; such a huge amount of data puts great pressure on transmission bandwidth and storage capacity. To meet both needs, in practice only a few viewpoint videos are transmitted, and rendering techniques are used at the receiver to synthesize the other, unknown virtual views, thereby reducing the amount of video data that must be transmitted.
Recently a new 3D video format, multi-view video plus depth (MVD, Multi-View Video plus Depth), has appeared. By adding depth information it describes the geometric information of the 3D scene better, and it allows satisfactory virtual views to be synthesized from the depth information while reducing the amount of video information required; it has therefore received wide attention from academia and industry. MVD contains the two kinds of information that describe a multi-view video: texture video and depth video. The texture video is the conventional video information, including the chrominance and luminance of the scene; the depth video describes the distance between the scene and the camera, each pixel of a depth-map frame usually being represented with 8 bits. Although depth information makes the rendering of virtual views possible, the MVD format carries one more dimension of information than the traditional 3D video format, namely the depth maps. Besides compressing the multi-view video, the depth maps must therefore also be compressed, and traditional image coding methods have many limitations when compressing depth maps.
As for depth-map coding methods, depth maps are currently encoded as ordinary gray-scale images, using traditional transform-based coders (such as JPEG2000 or H.264/AVC) to compress them. A transform-based encoder, however, cannot guarantee that the discontinuities of the depth map are preserved after coding. To retain more information in regions of interest, one article proposed a region-of-interest-based extension of the JPEG2000 encoder for depth-map coding. But all transform-based methods ultimately cause ringing artifacts at the edges; during view rendering these artifacts lead to incorrect projections and thus degrade the rendering quality.
Another class of depth-map coding methods is based on mesh segmentation of the depth map: each segmented region is described by a piecewise-linear function, and the segmentation itself is described by a corresponding tree structure. Although these methods can represent sharp edges well, they still cannot provide an accurate representation of the depth-map boundary pixels. To describe the depth discontinuity regions more accurately, such a method has to divide the image down to the pixel level, which ultimately increases the bit rate needed to describe the tree structure.
A further class of depth-map coding methods uses edge-lossless depth coding, which requires edge detection or segmentation. Classical edge detectors, however, cannot reliably find the required edges. Although one article proposed using Sobel edge detection, that detector finds too many edges, and during edge description it cannot describe the true edge information simply and effectively.
Summary of the invention
The object of the invention is to provide a new depth-map coding method that effectively exploits the characteristics of depth maps, achieves lossless reconstruction of the image edges at the decoder, and thereby improves both the rate-distortion performance of depth-map coding and the rendering quality of the virtual views.
The present invention is realized as follows, characterized by comprising the following steps:
Step 1: read in a depth map D of resolution p × q; set the edge-detection threshold λ1 and the down-sampling rate 1/(m × n), i.e. one pixel is sampled from each m × n image block of D, where λ1 ≥ 2, 2 ≤ m ≤ 16 and 2 ≤ n ≤ 16;
Step 2: coding of the edges of the depth map D, comprising the following steps:
[1]. Obtain the foreground edge image E_foreground and the background edge image E_background: according to the set edge-detection threshold λ1, compare the current pixel D(x, y) of the depth map with its upper neighbor D(x, y-1), lower neighbor D(x, y+1), left neighbor D(x-1, y) and right neighbor D(x+1, y) using formula (1); keep the larger pixel values to obtain the foreground edge image E_foreground, and keep the smaller pixel values to obtain the background edge image E_background, that is:
[2]. Describe each edge of E_foreground with a chain code and concatenate all chain codes into the foreground edge chain-code sequence chain-foreground; obtain the chain-code sequence chain-background of the background edges E_background in the same way; and record the start position and length of every chain code in an array t;
[3]. Apply forward-difference coding to the non-zero pixels of E_foreground and E_background respectively, obtaining the corresponding edge residual sequences residual-foreground and residual-background;
[4]. Apply arithmetic coding to chain-foreground, chain-background, residual-foreground and residual-background, obtaining the binary bitstream bitstream-edge;
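As a concrete illustration, the threshold comparison of step 2-[1] and the forward-difference coding of step 2-[3] can be sketched as follows. This is a minimal sketch: formula (1) is not reproduced in the text, so the exact detection rule (keep the larger value in the foreground edge and the smaller value in the background edge whenever a pixel differs from a 4-neighbor by at least λ1) is an assumption, and the chain coding and arithmetic coding of steps [2] and [4] are omitted.

```python
import numpy as np

def detect_edges(D, lam):
    """Threshold-based edge detection (assumed rule, since formula (1) is
    not reproduced): wherever a pixel differs from one of its 4-neighbors
    by at least `lam`, the larger value goes into the foreground edge
    image and the smaller value into the background edge image."""
    D = D.astype(np.int32)
    fg = np.zeros_like(D)
    bg = np.zeros_like(D)
    h, w = D.shape
    for y in range(h):
        for x in range(w):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and abs(D[y, x] - D[ny, nx]) >= lam:
                    if D[y, x] > D[ny, nx]:
                        fg[y, x] = D[y, x]   # larger side of the discontinuity
                    else:
                        bg[y, x] = D[y, x]   # smaller side of the discontinuity
    return fg, bg

def forward_diff_encode(edge_img):
    """Forward-difference coding of the non-zero edge pixel values
    (step 2-[3]): the first value is kept as-is, then successive
    differences, in raster-scan order."""
    vals = edge_img[edge_img != 0]
    if vals.size == 0:
        return np.array([], dtype=np.int32)
    return np.concatenate(([vals[0]], np.diff(vals)))

def forward_diff_decode(residual):
    # Cumulative sum inverts the forward difference exactly.
    return np.cumsum(residual)
```

On a toy depth map with one vertical discontinuity, the foreground edge keeps the far (large) values and the background edge keeps the near (small) values, and the difference coding round-trips losslessly.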
Step 3: coding of the seed pixels of the depth map D, comprising the following steps:
[1]. According to the set sampling template m × n, down-sample the original depth map D uniformly, i.e. sample the top-left pixel of every m × n pixel block, obtaining the seed image seedy;
[2]. Scan all pixels of the seed image seedy by rows or columns into a vector seed, then apply forward-predictive difference coding to seed, obtaining the residual sequence residual-seed of the seed image;
[3]. Apply arithmetic coding to residual-seed, obtaining bitstream-seed;
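The down-sampling and forward-predictive difference coding of step 3 can be sketched as follows (a minimal sketch: row-major scanning is assumed, and the arithmetic-coding stage of step [3] is omitted):

```python
import numpy as np

def encode_seed(D, m, n):
    """Step 3 sketch: take the top-left pixel of every m x n block as the
    seed image, raster-scan it into a vector, and forward-difference code
    the vector. Returns the seed-image shape and the residual sequence."""
    seedy = D[::m, ::n]                       # uniform down-sampling
    seed = seedy.astype(np.int32).ravel()     # row-major scan
    residual = np.concatenate(([seed[0]], np.diff(seed)))
    return seedy.shape, residual

def decode_seed(shape, residual):
    """Inverse: cumulative sum restores the scan, reshape restores seedy."""
    return np.cumsum(residual).reshape(shape)
```

Since the differences are integer and exact, the decoder recovers the seed image losslessly.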
Step 4: transmit bitstream-edge, bitstream-seed and the array t over a lossless channel;
Step 5: the decoding part, comprising the following steps:
[1]. Compute the length ratio of bitstream-seed to bitstream-edge, denoted k;
[2]. Apply arithmetic decoding to the received bitstreams bitstream-edge and bitstream-seed, recovering the foreground edge chain-code sequence chain-foreground, the background edge chain-code sequence chain-background, the foreground edge residual sequence residual-foreground, the background edge residual sequence residual-background, and the seed-image residual sequence residual-seed;
[3]. Apply forward-predictive difference decoding to the recovered residual-foreground, residual-background and residual-seed respectively, recovering the foreground edge pixel values, the background edge pixel values and the seed-image pixel values; using the array t, restore the corresponding edge pixel values of the edge image according to the chain-code rules, finally obtaining the recovered edge image E_decode;
[4]. Regenerate from the seed-image pixels, by rows or columns, the seed image seedy of size round(p/m) × round(q/n); then express seedy sparsely: assign the pixels of seedy to an all-zero image seed-sparse such that seed-sparse(m*i+1, n*j+1) = seedy(i, j), while the pixels at all other positions of seed-sparse remain zero, obtaining a sparse seed image seed-sparse, where round denotes rounding to the nearest integer and (i, j) denotes a pixel position;
[5]. Assign the non-zero pixels of E_decode to the pixels of seed-sparse at the same positions, i.e. seed-sparse(i, j) = E_decode(i, j), where (i, j) denotes the position of a non-zero pixel, obtaining the sparse representation S = seed-sparse of the depth map;
[6]. Perform the depth-map reconstruction:
(1) When k ≤ 1, estimate the unknown pixels of S, i.e. the zero-valued pixels $\tilde S(x, y)$, by natural-neighbor interpolation:
1) Build the known pixel set P from the non-zero pixels of S and construct the Delaunay triangulation of P; then draw the perpendicular bisector of each edge of every triangle in the triangulation; the polygons enclosed by these bisectors form the original Thiessen polygons;
2) Find the set of known natural neighbors $S_N(x_i, y_i)$ of each unknown pixel $\tilde S(x, y)$, where i = 1, 2, 3, ..., N, N is the number of pixels in $S_N(x_i, y_i)$, and $S_N(x_i, y_i) \subseteq P$;
3) Triangulate again each unknown pixel together with the N pixels $S(x_i, y_i)$ of its natural-neighbor set $S_N(x_i, y_i)$, obtaining a new triangulation, and derive from it a new Thiessen polygon;
4) Split the new Thiessen polygon into several regions along the edges of the original Thiessen polygons, and compute the weight $w_i$ of each natural neighbor of the unknown pixel from the area ratios, i.e. each region's area divided by the total area of all the regions;
5) Estimate each unknown (zero-valued) pixel $\tilde S(x, y)$ of S with formula (2):
$\tilde{S}(x, y) = \sum_{i=1}^{N} w_i\, S(x_i, y_i)$;  (2)
(2) When k > 1, perform the depth-map reconstruction with the partial differential equation of formula (3):
$\tilde{S}_{L=0} = S$;  $\tilde{S}_{L=i}(x, y) = \frac{1}{4}\big(\tilde{S}_{L=i-1}(x-1, y) + \tilde{S}_{L=i-1}(x+1, y) + \tilde{S}_{L=i-1}(x, y-1) + \tilde{S}_{L=i-1}(x, y+1)\big)$;  (3)
where L denotes the iteration number, $\tilde S$ denotes the unknown pixels, and the iteration stops when $|\tilde{S}_{L=i}(x, y) - \tilde{S}_{L=i-1}(x, y)| \le 0.0001$.
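The iteration of formula (3) can be sketched as a Jacobi relaxation of the discrete Laplace equation over the unknown pixels. This is a sketch under one assumption the patent does not specify: pixels at the image border are handled by edge replication.

```python
import numpy as np

def pde_reconstruct(S, known_mask, tol=1e-4, max_iter=10000):
    """Formula (3) as Jacobi iteration: known pixels (edge and seed values
    in S) are held fixed; every unknown pixel is repeatedly replaced by the
    average of its four neighbors until the largest update is <= tol."""
    S = S.astype(np.float64)
    cur = S.copy()
    for _ in range(max_iter):
        padded = np.pad(cur, 1, mode='edge')       # border handling: assumption
        avg = 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:])
        nxt = np.where(known_mask, S, avg)         # keep known pixels lossless
        if np.max(np.abs(nxt - cur)) <= tol:       # stopping criterion
            return nxt
        cur = nxt
    return cur
```

Between two known values the iteration converges to their harmonic (here, linear) interpolation, which is exactly the smooth-region behavior the method relies on.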
Finally, from the viewpoint of rate optimization, the available bits are allocated between the edges and the seeds. Experiments show that at low bit rates the number of bits allocated to the edges should exceed the number allocated to the seeds, while at high bit rates the number of bits allocated to the seeds should exceed the number allocated to the edges: at high rates, once the essential edges are guaranteed, the surplus bits go to the seeds, so that with the edges kept lossless the depth values of the unknown pixels vary only within a very small range. After decoding, the depth values of the smooth regions then approximate the corresponding values of the smooth regions of the original image as closely as possible, which reduces the distortion.
The advantages and beneficial effects of the present invention are:
1. The present invention uses a threshold method for edge detection and retains both foreground and background edges; the detected edges therefore differ from the traditional single-pixel edges and are, at the same time, better suited to image reconstruction by partial differential equations than the edges of the Sobel detector.
2. Compared with the standard coder JPEG2000, the reconstructed depth-map edges are of better quality and closer to the original, and no edge-blur artifacts appear.
3. In terms of compression performance, the compression ratio exceeds that of JPEG2000, and at the same bit rate the rendering quality is also better than that of JPEG2000.
Brief description of the drawings
Fig. 1 is the system implementation block diagram of the present invention;
Fig. 2 shows the edges and the seed image obtained with the edge-detection method proposed by the inventive method;
Fig. 3 is a partial enlargement of Fig. 2;
In Fig. 4, (a) is the background edge image and (b) is the foreground edge image;
Fig. 5 is the rate-distortion comparison for the 35th depth-map frame of camera 3 of the Ballet sequence;
Fig. 6 is the rate-distortion comparison of the rendered 35th frame of camera 4 of the Ballet sequence. The rendering always uses the uncompressed reference-view texture maps; the depth map is encoded with the different coding methods (the method of this patent and JPEG2000), and the decoded and reconstructed depth maps are used to render the texture map of camera 4 with the simplest DIBR (Depth Image Based Rendering) method; the rendering process comprises 3D warping, median filtering, hole filling and Gaussian low-pass filtering.
In Fig. 7, (c) is the texture map of cam4 rendered from the depth map decoded by the present invention (bpp = 0.1), and (d) is the texture map of cam4 rendered from the depth map decoded by JPEG2000 (bpp = 0.1).
Fig. 8 illustrates the computation of the natural-neighbor interpolation weights: (e) the initial triangulation and the initial Thiessen polygons of the known pixels, (f) the triangulation after adding the unknown pixel, (g) the new Thiessen polygon after adding the unknown pixel, and (h) the partition regions formed when the initial Thiessen polygons split the new Thiessen polygon.
Detailed description of the invention
For the image coding scheme proposed by the present invention we carried out preliminary test experiments, using the Ballet sequence as the input image and assuming transmission over a lossless channel. The encoding was run on an ASUS notebook (Intel(R) Core(TM) i5 CPU 3230 @ 2.6 GHz, 4.00 GB RAM); the software platform was Matlab R2012a, and the depth-map coding scheme was implemented in the Matlab language.
Fig. 1 shows the flow chart of the present invention, characterized by the following concrete steps:
Step 1: read in a depth map D (resolution 768 × 1024) and set λ1 ≥ 18, m = 9 and n = 9;
Step 2: coding of the edges of the depth map D, comprising the following steps:
1. Obtain the foreground edge image E_foreground and the background edge image E_background: according to the set edge-detection threshold λ1, compare the current pixel D(x, y) of the depth map with its neighbors (D(x-1, y), D(x+1, y), D(x, y-1) and D(x, y+1)) using formula (1); keep the larger pixel values to obtain the foreground edge image E_foreground, and keep the smaller pixel values to obtain the background edge image E_background, that is:
2. Describe each edge of E_foreground with a chain code and concatenate all chain codes into the foreground edge chain-code sequence chain-foreground; obtain the chain-code sequence chain-background of the background edges E_background in the same way; record the start position and length of every chain code in the array t;
3. Apply forward-difference coding to the non-zero pixels of E_foreground and E_background respectively, obtaining the corresponding edge residual sequences residual-foreground and residual-background;
4. Apply arithmetic coding to chain-foreground, chain-background, residual-foreground and residual-background, obtaining the binary bitstream bitstream-edge;
Step 3: coding of the seed pixels of the depth map D, comprising the following steps:
1. According to the set sampling template m × n, down-sample the original depth map D uniformly (i.e. sample the top-left pixel of every m × n pixel block), obtaining the seed image seedy;
2. Scan all pixels of the seed image seedy by rows or columns into a vector seed, then apply forward-predictive difference coding to seed, obtaining the residual sequence residual-seed of the seed image;
3. Apply arithmetic coding to residual-seed, obtaining bitstream-seed;
Step 4: transmit bitstream-edge, bitstream-seed and the array t over a lossless channel;
Step 5: the decoding part, comprising the following steps:
1. Compute the length ratio of bitstream-seed to bitstream-edge; here k = 0.5;
2. Apply arithmetic decoding to the received bitstreams bitstream-edge and bitstream-seed, recovering the foreground edge chain-code sequence chain-foreground, the background edge chain-code sequence chain-background, the foreground edge residual sequence residual-foreground, the background edge residual sequence residual-background, and the seed-image residual sequence residual-seed;
3. Apply forward-predictive difference decoding to the recovered residual-foreground, residual-background and residual-seed respectively, recovering the foreground edge pixel values, the background edge pixel values and the seed-image pixel values; using the array t, restore the corresponding edge pixel values of the edge image according to the chain-code rules, finally obtaining the recovered edge image E_decode;
4. Regenerate from the seed-image pixels, by rows or columns (the rearrangement order being consistent with the encoder), the seed image seedy of size round(p/m) × round(q/n); then express seedy sparsely (assign the pixels of seedy to an all-zero image seed-sparse such that seed-sparse(m*i+1, n*j+1) = seedy(i, j), where (i, j) denotes a pixel position, while the pixels at all other positions of seed-sparse remain zero), obtaining a sparse seed image seed-sparse, where round denotes rounding to the nearest integer;
5. Assign the non-zero pixels of E_decode to the pixels of seed-sparse at the same positions (i.e. seed-sparse(i, j) = E_decode(i, j), where (i, j) denotes the position of a non-zero pixel), obtaining the sparse representation S = seed-sparse of the depth map;
6. Perform the depth-map reconstruction:
Since k = 0.5 < 1, the unknown pixels of S (i.e. the zero-valued pixels) $\tilde S(x, y)$ are estimated by the following natural-neighbor interpolation method:
1) Build the Delaunay triangulation of the known pixel set P, then draw the perpendicular bisector of each edge of every triangle in the triangulation; the polygons enclosed by the bisectors are the Thiessen polygons;
2) Find the set of natural neighbors of the unknown pixel (where $S(x_i, y_i) \in P$ and N is the number of pixels in the set);
3) Add the unknown pixel, triangulate it again together with its natural neighbor pixels to obtain a new triangulation, and finally derive the new Thiessen polygon;
4) Split the new Thiessen polygon into several regions along the edges of the old Thiessen polygons, and compute the weight $w_i$ of each neighbor from the area ratios, i.e. each region's area divided by the total area of all the regions;
5) Estimate the unknown (zero-valued) pixels $\tilde S(x, y)$ of S with formula (2):
$\tilde{S}(x, y) = \sum_{i=1}^{N} w_i\, S(x_i, y_i)$;  (2)
The natural-neighbor interpolation method is demonstrated with an example, as follows (here it is assumed that only one pixel is interpolated, namely the yellow pixel of Fig. 8 (f); in the figures one grid cell represents one pixel):
1) Build the Delaunay triangulation (the green lines of Fig. 8 (e)) of the known pixel set P (the red part of Fig. 8 (e)), then draw the perpendicular bisector of each edge of every triangle in the triangulation; the polygons enclosed by the bisectors are the Thiessen polygons (shown as the blue lines of Fig. 8 (e));
2) Find the set of natural neighbors of the unknown pixel (the yellow pixel of Fig. 8 (f)), shown as the dark-green pixels of Fig. 8 (f);
3) Add the unknown pixel, triangulate it again together with its natural neighbor pixels (the yellow and dark-green pixels of Fig. 8 (f)) to obtain a new triangulation (the pink lines of Fig. 8 (f)), and finally derive the new Thiessen polygon (the brown lines of Fig. 8 (g));
4) Split the new Thiessen polygon into four regions along the corresponding blue lines of Fig. 8 (e), and compute the weights of the neighbors (P1, P2, P3, P4) from the area ratios (see Fig. 8 (h): the weight w1 of pixel P1 is the ratio of the slash-shaded area to the total shaded area, the weight w2 of pixel P2 is the ratio of the rice-grain-shaded area to the total shaded area, the weight w3 of pixel P3 is the ratio of the horizontal-bar-shaded area to the total shaded area, and the weight w4 of pixel P4 is the ratio of the vertical-bar-shaded area to the total shaded area);
5) Estimate the unknown (zero-valued) pixel $\tilde S(x, y)$ of S with formula (3):
$\tilde{S}(x, y) = w_1 S(P_1) + w_2 S(P_2) + w_3 S(P_3) + w_4 S(P_4)$  (3)
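The estimate of formula (3) in this example is a convex combination of the four neighbor values, with weights given by area shares. With hypothetical area values standing in for the shaded regions of Fig. 8 (h) (these numbers are illustrative, not taken from the figure), it can be computed as:

```python
def natural_neighbor_estimate(values, areas):
    """Formula (3) of the worked example: the unknown pixel is a weighted
    average of its natural neighbors P1..P4, each weight being that
    neighbor's share of the total overlap area. `values` are the depth
    values S(P1)..S(P4); `areas` are the corresponding region areas
    (hypothetical illustrative numbers here)."""
    total = sum(areas)
    weights = [a / total for a in areas]   # w_i = area_i / sum of areas
    return sum(w * v for w, v in zip(weights, values))
```

Because the weights sum to 1, the estimate always lies within the range of the neighbor values, which keeps the smooth regions free of over- and undershoot.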
In the experiments we used the peak signal-to-noise ratio (PSNR) as the quality measure of the results. Fig. 2 shows the sparse-representation depth map obtained by the threshold-based edge detection and down-sampling proposed by the invention; Fig. 3 is a partial view of Fig. 2; Fig. 4 shows the foreground and background edges (a part of Fig. 2) obtained with the edge-detection method proposed in this patent. Fig. 5 is the rate-distortion comparison for the 35th depth-map frame of camera 3 of the Ballet sequence; it can be seen that the PSNR of the present scheme is clearly higher than that of JPEG2000, and that the method of this patent guarantees approximate recovery of the smooth regions while the depth-map edges are recovered without distortion. Fig. 6 is the rate-distortion comparison of the rendered 35th frame of camera 4 of the Ballet sequence; the rendering always uses the uncompressed reference-view texture maps, the depth map is encoded with the different coding methods (the method of this patent and JPEG2000), and the decoded and reconstructed depth maps are used to render the texture map of camera 4 with the simplest DIBR (Depth Image Based Rendering) method; the rendering process comprises 3D warping, median filtering, hole filling and Gaussian low-pass filtering. Fig. 7 compares the texture map of cam4 rendered from the depth map decoded by this patent (bpp = 0.1) with the texture map of cam4 rendered from the depth map decoded by JPEG2000 (bpp = 0.1); it can clearly be seen that the transform-based JPEG2000 introduces edge ringing artifacts into the rendered image, whereas the edges rendered by this patent are closer to the true edges. Evidently the scheme of this patent can significantly improve both the quality of the recovered depth map and the quality of the virtual views rendered from it.
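The PSNR measure used in the experiments can be computed for 8-bit depth maps (peak value 255) as follows:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """PSNR quality measure: 10 * log10(peak^2 / MSE).
    For identical images the MSE is zero and the PSNR is infinite."""
    mse = np.mean((original.astype(np.float64) -
                   reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)
```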

Claims (1)

1. A depth map encoding method based on lossless edge compression, which can effectively utilize the essential characteristics of a depth map to encode it, characterized in that the operating steps are:
Step 1: read in a depth map D of resolution p × q; set the edge-detection threshold λ1 and the down-sampling rate 1/(m × n), i.e. one pixel is sampled from each m × n image block of D, where λ1 ≥ 2, 2 ≤ m ≤ 16 and 2 ≤ n ≤ 16;
Step 2: coding of the edges of the depth map D, comprising the following steps:
[1]. Obtain the foreground edge image E_foreground and the background edge image E_background: according to the set edge-detection threshold λ1, compare the current pixel D(x, y) of the depth map with its upper neighbor D(x, y-1), lower neighbor D(x, y+1), left neighbor D(x-1, y) and right neighbor D(x+1, y) using formula (1); keep the larger pixel values to obtain the foreground edge image E_foreground, and keep the smaller pixel values to obtain the background edge image E_background, that is:
[2]. Describe each edge of E_foreground with a chain code and concatenate all chain codes into the foreground edge chain-code sequence chain-foreground; obtain the chain-code sequence chain-background of the background edges E_background in the same way; and record the start position and length of every chain code in an array t;
[3]. Apply forward-difference coding to the non-zero pixels of E_foreground and E_background respectively, obtaining the corresponding edge residual sequences residual-foreground and residual-background;
[4]. Apply arithmetic coding to chain-foreground, chain-background, residual-foreground and residual-background, obtaining the binary bitstream bitstream-edge;
Step 3: coding of the seed pixels of the depth map D, comprising the following steps:
[1]. According to the set sampling template m × n, down-sample the original depth map D uniformly, i.e. sample the top-left pixel of every m × n pixel block, obtaining the seed image seedy;
[2]. Scan all pixels of the seed image seedy by rows or columns into a vector seed, then apply forward-predictive difference coding to seed, obtaining the residual sequence residual-seed of the seed image;
[3]. Apply arithmetic coding to residual-seed, obtaining bitstream-seed;
Step 4: transmit bitstream-edge, bitstream-seed and the array t over a lossless channel;
Step 5: the decoding part, comprising the following steps:
[1]. Compute the length ratio of bitstream-seed to bitstream-edge, denoted k;
[2]. Apply arithmetic decoding to the received bitstreams bitstream-edge and bitstream-seed, recovering the foreground edge chain-code sequence chain-foreground, the background edge chain-code sequence chain-background, the foreground edge residual sequence residual-foreground, the background edge residual sequence residual-background, and the seed-image residual sequence residual-seed;
[3]. Apply forward-predictive difference decoding to the recovered residual-foreground, residual-background and residual-seed respectively, recovering the foreground edge pixel values, the background edge pixel values and the seed-image pixel values; using the array t, restore the corresponding edge pixel values of the edge image according to the chain-code rules, finally obtaining the recovered edge image E_decode;
[4]. Regenerate from the seed-image pixels, by rows or columns, the seed image seedy of size round(p/m) × round(q/n); then express seedy sparsely: assign the pixels of seedy to an all-zero image seed-sparse such that seed-sparse(m*i+1, n*j+1) = seedy(i, j), while the pixels at all other positions of seed-sparse remain zero, obtaining a sparse seed image seed-sparse, where round denotes rounding to the nearest integer and (i, j) denotes a pixel position;
[5]. Assign the non-zero pixels of E_decode to the pixels of seed-sparse at the same positions, i.e. seed-sparse(i, j) = E_decode(i, j), where (i, j) denotes the position of a non-zero pixel, obtaining the sparse representation S = seed-sparse of the depth map;
[6]. carry out depth map reconstruction:
(1) when k≤1, by nature nearest neighbour interpolation method to the unknown pixel in S i.e. zero value pixelsCarry out valuation:
1) build known pixels group P with the non-zero pixels point in S, and build Delaunay triangulation network corresponding to P, then to the triangulation network In each edge of each triangle make perpendicular bisector, perpendicular bisector the polygon surrounded forms original Thiessen polygon;
2) each unknown pixel point is found outKnown neihbor poincts close SN(xi,yi), wherein i=1,2, 3......N, N is set SN(xi,yiThe number of pixel, S in)N(xi,yi)∈P;
3) to each unknown pixel point and its naturally neighbouring set SN(xi,yiN number of pixel S (x in)i,yi) carry out triangle again and cut open Get a triangulation network, ask for a new Thiessen polygon;
4) according to the limit of original Thiessen polygon, this new Thiessen polygon is divided into several region, according to region area than coming Calculate the weight w of the natural abutment points of each unknown pixel pointi, the most each region area and the ratio of region gross area sum;
5) with formula (2) to each unknown pixel i.e. zero value pixels in SEstimate:
(2) When k > 1, carry out depth map reconstruction by the partial differential equation iteration of formula (3), where L denotes the iteration number and Ŝ^L(x, y) denotes the value of an unknown pixel at iteration L; the iteration stops when the change of every unknown pixel between iterations L and L+1 falls below a preset threshold.
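Formula (3) itself is not reproduced in this text. Assuming it is the standard homogeneous-diffusion (discrete Laplace) iteration commonly used for this kind of seed-and-edge depth reconstruction, a sketch could look like the following; the function name, the tolerance and the iteration cap are illustrative, not from the patent:

```python
import numpy as np

def pde_reconstruct(S, known_mask, max_iter=10000, tol=1e-4):
    """Fill the unknown pixels of the sparse map S by iterating the
    discrete Laplace equation: each unknown pixel is repeatedly
    replaced by the mean of its four neighbours until the largest
    per-pixel change drops below tol. Known pixels (edges and seeds)
    are held fixed as boundary conditions."""
    u = S.astype(float).copy()
    for _ in range(max_iter):
        # 4-neighbour mean with edge-replicating padding at the border.
        pad = np.pad(u, 1, mode='edge')
        nb_mean = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                   pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        new_u = np.where(known_mask, u, nb_mean)   # keep known pixels fixed
        done = np.max(np.abs(new_u - u)) < tol     # stopping criterion
        u = new_u
        if done:
            break
    return u
```

Because the known edge pixels are never modified, the diffusion fills only the smooth interior regions, which matches the stated aim of exploiting the fact that sharp edges partition the depth map into smooth regions.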
CN201410008702.4A 2014-01-09 2014-01-09 Depth map encoding method based on edge lossless compress Active CN103763564B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410008702.4A CN103763564B (en) 2014-01-09 2014-01-09 Depth map encoding method based on edge lossless compress

Publications (2)

Publication Number Publication Date
CN103763564A CN103763564A (en) 2014-04-30
CN103763564B true CN103763564B (en) 2017-01-04

Family

ID=50530712

Country Status (1)

Country Link
CN (1) CN103763564B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9871967B2 (en) * 2015-01-22 2018-01-16 Huddly As Video transmission based on independently encoded background updates
CN104966293B (en) * 2015-06-15 2017-12-05 裴小根 Image detail feature guard method based on PG algorithms
TWI669942B (en) 2016-10-14 2019-08-21 聯發科技股份有限公司 Method and apparatus of smoothing filter for ringing artefact removal
CN108364255A (en) * 2018-01-16 2018-08-03 辽宁师范大学 Remote sensing image amplification method based on rarefaction representation and Partial Differential Equation Model
CN108961347B (en) * 2018-06-26 2020-09-01 北京大学 Two-dimensional target boundary expression method based on regular triangle mesh chain codes
CN114007059A (en) * 2020-07-28 2022-02-01 阿里巴巴集团控股有限公司 Video compression method, decompression method, device, electronic equipment and storage medium
CN113727105B (en) * 2021-09-08 2022-04-26 北京医百科技有限公司 Depth map compression method, device, system and storage medium

Similar Documents

Publication Publication Date Title
CN103763564B (en) Depth map encoding method based on edge lossless compress
JP7303992B2 (en) Mesh compression via point cloud representation
JP7012642B2 (en) Auxiliary data for artifact-aware view composition
KR102184261B1 (en) How to compress a point cloud
US9736455B2 (en) Method and apparatus for downscaling depth data for view plus depth data compression
US11508041B2 (en) Method and apparatus for reconstructing a point cloud representing a 3D object
Gautier et al. Efficient depth map compression based on lossless edge coding and diffusion
CN102801997B (en) Stereoscopic image compression method based on interest depth
Hervieu et al. Stereoscopic image inpainting: distinct depth maps and images inpainting
EP3777180B1 (en) A method and apparatus for encoding/decoding a point cloud representing a 3d object
CN103220542A (en) Image processing method and apparatus for generating disparity value
CN115512073A (en) Three-dimensional texture grid reconstruction method based on multi-stage training under differentiable rendering
TW202037169A (en) Method and apparatus of patch segmentation for video-based point cloud coding
Sazzad et al. Objective No‐Reference Stereoscopic Image Quality Prediction Based on 2D Image Features and Relative Disparity
CN103581650A (en) Method for converting binocular 3D video into multicast 3D video
CN103873867B (en) Free viewpoint video depth map distortion prediction method and free viewpoint video depth map coding method
CN104718755A (en) Device, program, and method for reducing data size of multiple images containing similar information, and data structure expressing multiple images containing similar information
CN109345444B (en) Super-resolution stereoscopic image construction method with enhanced depth perception
Yuan et al. Object shape approximation and contour adaptive depth image coding for virtual view synthesis
CN101610422A (en) Method for compressing three-dimensional image video sequence
Gelman et al. Layer-based sparse representation of multiview images
Sharma et al. A novel 3d-unet deep learning framework based on high-dimensional bilateral grid for edge consistent single image depth estimation
CN104982032A (en) Method and apparatus for segmentation of 3D image data
CN103997653A (en) Depth video encoding method based on edges and oriented toward virtual visual rendering
CN105096352A (en) Significance-driven depth image compression method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant