CN106060512A - Method for selecting and filling reasonable mapping points in virtual viewpoint synthesis - Google Patents


Info

Publication number
CN106060512A
CN106060512A (application CN201610486806.5A)
Authority
CN
China
Prior art keywords
pixel
mapping
point
value
depth
Prior art date
Legal status: Granted
Application number
CN201610486806.5A
Other languages
Chinese (zh)
Other versions
CN106060512B (en)
Inventor
喻莉 (Yu Li)
王伟健 (Wang Weijian)
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201610486806.5A priority Critical patent/CN106060512B/en
Publication of CN106060512A publication Critical patent/CN106060512A/en
Application granted granted Critical
Publication of CN106060512B publication Critical patent/CN106060512B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/128 Adjusting depth or disparity
    • H04N13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for selecting and filling reasonable mapping points in virtual viewpoint synthesis. The method comprises a coincidence judging and processing step, a bad-pixel discrimination and processing step, and a crack-hole filling step. Compared with existing viewpoint synthesis techniques, when selecting among pixels that map to the same position, noise interference is eliminated and background points and foreground points are distinguished effectively, which markedly improves the accuracy of the mapped pixels; by effectively judging whether a mapped pixel is a bad pixel, mapping accuracy is improved; and when filling the crack holes produced by mapping, highly correlated pixel information is introduced and combined with the correlation between foreground and background, improving the accuracy of the information used to fill the cracks. The method overcomes the defects of the mapping processing in existing 3D synthesis techniques and provides a reliable way to select mapping pixels and fill crack holes, so that the pixels of the mapped image are accurate and the quality of the resulting virtual view is further improved.

Description

A method for selecting and filling reasonable mapping points in virtual viewpoint synthesis
Technical field
The present invention relates to the field of 3D video synthesis and, more particularly, in the three-dimensional mapping for virtual viewpoints, to methods for processing coincident mapping points, processing isolated mapping points, and filling the crack holes between mapping points.
Background technology
With the development of science, information technology and networked multimedia, people demand ever higher visual and auditory quality and urgently need more image and video multimedia information; high-definition and ultra-high-definition video have gradually come into common attention. People's perception of the world is also changing: traditional two-dimensional vision can no longer satisfy the demand for three-dimensional presentation. 3D technology has broad application prospects in many areas, with strong demand in medicine, industry, the military and elsewhere. Virtual viewpoint synthesis, a key technology in 3D display, has become one of the research hotspots in the video field.
Existing virtual viewpoint synthesis methods use DIBR (depth-image-based rendering); the mapping part of DIBR uses three-dimensional coordinate conversion. As shown in Fig. 1, through the pixel coordinate system and the image coordinate system, a matrix conversion relation is established between the world coordinate system and the pixel coordinate system, as in formula (1).
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{f}{dx} & 0 & u_0 \\ 0 & \frac{f}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} (R, t) \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = A\,(R, t) \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \qquad (1)$$
Here [u, v, 1]^T is a coordinate in the pixel coordinate system, [X, Y, Z, 1]^T is a coordinate in the world coordinate system, and [x, y]^T is a coordinate in the image coordinate system; u_0, v_0 give the position of the image coordinate origin in the pixel coordinate system; f is the camera focal length; R is the camera rotation matrix; t is the camera translation matrix; and Z_c is the spatial depth.
Formula (1) maps from the world coordinate system to the pixel coordinate system, i.e. from world coordinates to the virtual viewpoint; mapping from the original viewpoint to world coordinates uses the inverse of formula (1). From formula (1) the conversion matrices between the corresponding coordinate systems can be derived, and the three-dimensional mapping from the original viewpoint to the virtual viewpoint can then be completed by coordinate conversion.
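The two-stage warp described above (inverse of formula (1) into world coordinates, then formula (1) into the virtual camera) can be sketched as follows. This is an illustrative sketch, not the patent's code; the parameter names (`A_ref`, `R_virt`, etc.) are our assumptions.

```python
import numpy as np

def warp_to_virtual_view(u, v, depth, A_ref, R_ref, t_ref, A_virt, R_virt, t_virt):
    """Map pixel (u, v) with spatial depth `depth` from the original view into
    the virtual view, per formula (1): Zc [u, v, 1]^T = A (R, t) [X, Y, Z, 1]^T.
    A_* are 3x3 intrinsic matrices; R_* (3x3) and t_* (3,) are extrinsics."""
    # Inverse of formula (1): back-project the pixel to a camera-space point,
    # then transform camera coordinates into world coordinates.
    p_cam = depth * np.linalg.inv(A_ref) @ np.array([u, v, 1.0])
    p_world = np.linalg.inv(R_ref) @ (p_cam - t_ref)
    # Formula (1) for the virtual camera: world point -> virtual pixel.
    p_virt = A_virt @ (R_virt @ p_world + t_virt)
    z_c = p_virt[2]  # spatial depth Zc in the virtual view
    return p_virt[0] / z_c, p_virt[1] / z_c, z_c
```

With identical camera parameters the round trip returns the original pixel; translating the virtual camera along the baseline shifts the mapped column, which is the disparity that DIBR exploits.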
In existing virtual viewpoint synthesis, first, the handling of coincident mapping points considers only the size of the spatial depth: depth values are compared directly, the point with the larger depth value is kept and the point with the smaller depth value is discarded, without accounting for possible errors in the depth map. Because the depth map contains errors, pixels that should have identical depth values end up with different ones; if two such points coincide after mapping and only depth is considered, the wrong point may be selected, making the final result inaccurate.
Second, after a pixel of the original viewpoint is mapped, its mapping position in the virtual view may contain no pixel while both neighbouring positions already contain mapped pixels. Existing methods simply project the point there without judging whether it is a bad pixel; if the mapping point is a bad pixel, a mapping error is produced.
Third, when two successive mappings complete and the horizontal distance between the two mapping positions is smaller than the crack width set for the current virtual view synthesis, the crack hole between them can be filled from the values of the two points. The existing practice fills the current crack hole using only the value of the previous pixel, ignoring the information of the current mapping point; this loses information and produces a larger error in the final result.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the defects of the mapping processing in existing 3D synthesis techniques by providing a method that reliably selects mapping pixels and fills crack holes, so that the pixels of the mapped image are accurate and the quality of the resulting virtual view is further improved.
The method proposed by the present invention for selecting and filling reasonable mapping points in virtual viewpoint synthesis comprises the following steps:
(1) Complete the three-dimensional mapping of each pixel from the original viewpoint coordinate system to the virtual viewpoint coordinate system, obtaining the mapping position in the virtual viewpoint coordinate system of each pixel of the original viewpoint image;
(2) Determine whether a pixel already exists at this mapping position; if so, process it with the following improved Z-buffer algorithm, otherwise go to step (3);
(2-1) Determine whether the absolute difference between the depth value of the incoming mapped pixel and that of the existing pixel is smaller than a preset noise threshold N. If so, consider the two points coplanar and coincident, and go to step (2-2). Otherwise the two points lie in different planes; by the principle that foreground occludes background, keep the pixel information with the larger depth value, i.e. the foreground pixel information, and go to step (5);
(2-2) For each of the two points, compute the sum of the colour differences to its surrounding neighbouring pixels; choose as the pixel of the current mapping position the point whose average colour difference to its neighbours is smaller, then go to step (5);
(3) When the mapping position contains no pixel, process as follows:
(3-1) Average the depths of the pixels on the two sides, as in formula (2):

$$D_{avg} = (D_l + D_r)/2 \qquad (2)$$

where D_avg is the average depth of the pixels on the two sides, and D_l, D_r are the depth values of the left and right pixels respectively;
(3-2) If the absolute difference between the depth value D of the current mapping point and the average D_avg of the left and right pixel depths is greater than a threshold M, while the difference between the left and right pixel depths is smaller than M, judge the current point to be a bad pixel; fill the missing pixel information of the current mapping position with the average information of the left and right pixels, and go to step (4). Otherwise go to (3-3);
(3-3) If the absolute difference between the depth value D of the currently mapped pixel and the average D_avg is smaller than M, or the difference between the left and right pixel depths is greater than M, the currently mapped pixel is judged good; fill the missing pixel information of the current mapping position with the information of the pixel mapped from the original viewpoint image, and go to step (4);
(4) Crack-hole filling, comprising the following sub-steps:
(4-1) After each mapping, compute the crack-hole amount ΔP = P_i − P_{i−1}, where P_i is the horizontal mapping position of the current mapping point and P_{i−1} that of the previous mapping point;
Determine whether ΔP is less than or equal to the crack threshold W; if so, go to (4-2) and fill the crack hole; otherwise do not fill, and go to step (5);
(4-2) Take the difference of the depths of the two successively mapped pixels and determine whether its absolute value is smaller than M. If so, consider the two points to lie in the same plane and fill the crack hole with their average information; otherwise fill the current crack hole with the pixel information of whichever of the two points lies in the background;
(5) This round of selecting and filling mapping points ends.
Preferably, the noise threshold of step (2) is N = (D_max − D_min) × 0.02, where D_max and D_min are the maximum and minimum depth values in the depth map of spatial depth values; the threshold of step (3) is M = (D_avg − D_min) × 0.05; and the crack threshold W of step (4) is 2. If N is too large, the range treated as noise widens and correct depth differences are absorbed into the noise, introducing larger errors; if N is too small, part of the noise is treated as correct depth difference, weakening the algorithm. If M is too large, the tolerance to noise rises and bad pixels are processed as correct ones; if M is too small, the tolerance falls and correct pixels are processed as bad ones. If W is too large, holes over large regions are processed as crack holes, reducing filling accuracy; if W is too small, small-region crack holes are processed as large holes, increasing the filling workload.
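The preferred threshold settings above can be sketched as below; the function names are ours, not the patent's, and the coefficients are the patent's empirical choices.

```python
import numpy as np

def noise_threshold(depth_map):
    """Preferred setting of step (2): N = (D_max - D_min) * 0.02,
    computed over the whole depth map of spatial depth values."""
    return (depth_map.max() - depth_map.min()) * 0.02

def bad_pixel_threshold(d_avg, d_min):
    """Preferred setting of step (3): M = (D_avg - D_min) * 0.05, where
    D_avg is the local average depth of the left and right neighbours."""
    return (d_avg - d_min) * 0.05

CRACK_THRESHOLD_W = 2  # preferred crack threshold of step (4)
```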
In the present invention, pixel information comprises depth and colour information. A crack hole is a position whose colour value and depth value are both 0.
In the present invention, images use the YUV format. A YUV colour picture has three channels Y, U, V, where the Y channel carries luminance and the U, V channels carry chrominance.
In general, compared with the original viewpoint synthesis techniques, the technical scheme conceived by the present invention has the following advantages:
A. When choosing among pixels whose mapping positions coincide, background points and foreground points are distinguished effectively: pixels in the same plane are handled reasonably, and a reasonable pixel is also chosen when the points lie in different planes, which markedly improves the accuracy of the mapped pixels;
B. During mapping, an effective judgement is made as to whether a mapping point is a bad pixel, so a reasonable pixel can be chosen according to the mapping position, improving mapping accuracy;
C. When filling the crack holes produced by mapping, highly correlated pixel information is introduced and used reasonably, combined with the correlation between foreground and background, improving the accuracy of the information used to fill the cracks.
It is apparent from the above that the new method effectively remedies the deficiencies of former methods, and the final synthesis result is significantly improved.
Brief description of the drawings
Fig. 1 shows the relations among the four coordinate systems in the three-dimensional coordinate conversion of virtual viewpoint synthesis;
Fig. 2 is the algorithm flow of the method;
Fig. 3 analyses the coincidence phenomenon that occurs after different pixels are mapped;
Fig. 4 analyses the case where the mapping position of a mapping point has no pixel but pixels exist on both sides;
Fig. 5 analyses the filling of crack holes after each mapping.
Detailed description of the invention
To make the objects, technical scheme and advantages of the present invention clearer, the invention is further elaborated below in conjunction with the accompanying drawings and an implementation case. It should be understood that the specific case described here only explains the present invention and is not intended to limit it. In addition, the technical features involved in the embodiments of the invention described below may be combined with each other as long as they do not conflict.
The present invention is further elaborated below in conjunction with the accompanying drawings:
Fig. 3 illustrates the processing of coincident mapping points. On the left is a pixel A(x, y) in the original viewpoint (depth value DepA, colour information ColA); after three-dimensional mapping, this point projects to position (i, j) in the virtual view. Because a previously mapped pixel V(i, j) (depth value DepV, colour information ColV) already exists at position (i, j), a mapping coincidence occurs, and a selection between the coincident points must now be made.
The selection process is as follows:
First, compute the depth difference between A(x, y) and V(i, j): ∇D = DepA − DepV.
Second, use formula (3) to judge whether the two points are in the same plane, where the noise threshold N represents the depth difference between background and foreground.
$$\mathrm{Mark} = \begin{cases} -1, & \nabla D < -N \\ 1, & \nabla D > N \\ 0, & |\nabla D| \le N \end{cases} \qquad (3)$$
Third, select between the coincident points.
When Mark equals −1, pixel V(i, j) is the foreground point and A(x, y) the background point; select V(i, j) as the current point and discard A(x, y).
When Mark equals 1, pixel A(x, y) is the foreground point and V(i, j) the background point; select A(x, y) as the current point and discard V(i, j).
When Mark equals 0, A(x, y) and V(i, j) are in the same plane. Formula (4) is then used to obtain, for each of A(x, y) and V(i, j), the degree of colour change with respect to the surrounding pixels, ∇C_A and ∇C_V. If ∇C_A < ∇C_V, select A(x, y) as the current point and discard V(i, j); if ∇C_A > ∇C_V, select V(i, j) as the current point and discard A(x, y).
$$\nabla C = \sum_{m=-1}^{1} \sum_{n=-1}^{1} \left| P_c(p, q) - P_c(p+m,\, q+n) \right| \qquad (4)$$

where P_c denotes the colour value of the point under evaluation (A or V) and (p, q) its coordinates in its own image.
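The coincidence selection of formulas (3)-(4) can be sketched as below. This is an illustrative sketch: the function name is ours, and the neighbourhood colour-change measures ∇C are assumed to be precomputed per formula (4).

```python
def resolve_coincidence(dep_a, col_a, dep_v, col_v, change_a, change_v, n_threshold):
    """Improved Z-buffer selection between an incoming point A and an existing
    point V at the same mapping position. A larger depth value is treated as
    foreground, as in the text; change_a / change_v are the colour-change
    measures (nabla C) of the two points w.r.t. their 8-neighbourhoods."""
    grad_d = dep_a - dep_v               # depth difference of formula (3)
    if grad_d < -n_threshold:            # Mark = -1: V is foreground, keep V
        return col_v
    if grad_d > n_threshold:             # Mark = 1: A is foreground, keep A
        return col_a
    # Mark = 0: the points are coplanar; keep the one that agrees better
    # with its neighbourhood (smaller colour change), suppressing noise.
    return col_a if change_a < change_v else col_v
```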
Fig. 4 illustrates the method for judging whether a mapping point in the virtual view is a bad pixel. The pixel A(x, y) at the top is a pixel of the reference image; after three-dimensional coordinate conversion it is mapped to position (i, j) in the virtual view. No pixel exists at this position, but pixels V(i−1, j) and V(i+1, j) exist at positions (i−1, j) and (i+1, j). V(i−1, j) and V(i+1, j) are now used to judge whether A(x, y) is a bad pixel. The judging process is as follows:
First, obtain the average depth of the left and right adjacent positions: D_avg = (D_V(i−1,j) + D_V(i+1,j))/2;
Second, use formula (5) to obtain the threshold M of the background-foreground depth difference;
$$M = (D_{avg} - D_{min}) \times 0.05 \qquad (5)$$
Third, judge whether the current mapping point is a bad pixel, using formula (6):

$$\mathrm{sign} = \begin{cases} 1, & |D_{V(i-1,j)} - D_{V(i+1,j)}| < M \ \text{and}\ |D_{A(x,y)} - D_{avg}| > M \\ 0, & \text{otherwise} \end{cases} \qquad (6)$$

When the depth difference between V(i−1, j) and V(i+1, j) is smaller than M while the depth of A(x, y) differs from D_avg by more than M, A(x, y) is judged to be a bad pixel and the flag bit sign is set to 1.
When the depth difference between V(i−1, j) and V(i+1, j) is greater than M, or the depth of A(x, y) differs from D_avg by less than M, A(x, y) is judged to be a correct pixel and the flag bit sign is set to 0.
Fourth, use formula (7) to obtain the pixel information of the current mapping position.
$$V(i, j) = \mathrm{sign} \times \frac{V(i-1, j) + V(i+1, j)}{2} + (1 - \mathrm{sign}) \times A(x, y) \qquad (7)$$
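Formulas (5)-(7) amount to the following check; this sketch treats colour as a scalar for brevity, and the function name is ours rather than the patent's.

```python
def fill_with_bad_pixel_check(d_mapped, col_mapped, d_left, col_left,
                              d_right, col_right, d_min):
    """Bad-pixel test of Fig. 4: the mapped point is a bad pixel (sign = 1)
    when its depth deviates from the left/right average by more than M while
    the two neighbours agree with each other; in that case the neighbours'
    average replaces it, per formula (7)."""
    d_avg = (d_left + d_right) / 2.0
    m = (d_avg - d_min) * 0.05                       # formula (5)
    if abs(d_left - d_right) < m and abs(d_mapped - d_avg) > m:
        return (col_left + col_right) / 2.0          # sign = 1: bad pixel
    return col_mapped                                # sign = 0: keep mapping
```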
Fig. 5 illustrates the method of filling a crack hole. Two consecutive pixels A(x−1, y) and A(x, y) in the original viewpoint are mapped, after three-dimensional coordinate conversion, to positions (i−1, j) and (i+1, j) in the virtual view; the middle position (i, j) has no pixel, and crack-hole filling is now performed on it. The filling method is as follows:
First, obtain the difference ΔP of the two successive mapping positions by formula (8);
$$\Delta P = P_{i+1} - P_i \qquad (8)$$
Second, judge whether ΔP is less than or equal to the crack threshold width W; if ΔP ≤ W, perform crack-hole filling, otherwise do not execute this algorithm;
Third, here ΔP equals 1, which is less than W. Use formula (9) to judge whether A(x−1, y) and A(x, y) are at the same level: if the depth difference of the two points is smaller than the threshold M, set Mark to 1; if it is greater than M, set Mark to 0.
$$\mathrm{Mark} = \begin{cases} 1, & |D_{A(x-1,y)} - D_{A(x,y)}| < M \\ 0, & |D_{A(x-1,y)} - D_{A(x,y)}| > M \end{cases} \qquad (9)$$
Fourth, if A(x−1, y) and A(x, y) are at the same level, fill the crack hole with the average information of the two pixels; if they are not at the same level, fill it with whichever of the two is the background point, as shown in formula (10).
$$V(i, j) = \begin{cases} \dfrac{V(i-1, j) + V(i+1, j)}{2}, & \mathrm{Mark} = 1 \\[4pt] \min\nolimits_{\mathrm{depth}}\big(A(x-1, y),\, A(x, y)\big), & \mathrm{Mark} = 0 \end{cases} \qquad (10)$$
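The crack-filling rule of formulas (8)-(10) can be sketched as follows (scalar colour, illustrative names; the background point is taken as the one with the smaller depth value, since larger depth denotes foreground in this text).

```python
def fill_crack(p_prev, d_prev, col_prev, p_curr, d_curr, col_curr, m, w=2):
    """Fill the crack hole between two successive mapping positions.
    Returns the fill value, or None when the gap exceeds the crack
    threshold W and must be treated as a large hole instead."""
    if p_curr - p_prev > w:              # formula (8): gap too wide for a crack
        return None
    if abs(d_prev - d_curr) < m:         # Mark = 1 (formula (9)): same plane
        return (col_prev + col_curr) / 2.0
    # Mark = 0: fill from the background point (smaller depth), formula (10)
    return col_prev if d_prev < d_curr else col_curr
```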
In the present embodiment the noise threshold is N = (D_max − D_min) × 0.02, where D_max and D_min are the maximum and minimum depth values in the depth map of spatial depth values; the crack threshold W is 2.
By the above method, background points and foreground points can be distinguished reasonably and effectively, and suitable pixels are chosen to fill the information of the current point, improving the efficiency and accuracy of the three-dimensional coordinate conversion and making the quality of the finally synthesised virtual view better than that of former methods.
As will be readily appreciated by those skilled in the art, the foregoing is only a specific case of the present invention and is not intended to limit it; any modification, equivalent substitution and improvement made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (5)

1. A method for selecting and filling reasonable mapping points in virtual viewpoint synthesis, characterised in that it comprises the following steps:
(1) completing the three-dimensional mapping of pixels from the original viewpoint coordinate system to the virtual viewpoint coordinate system, obtaining the mapping position in the virtual viewpoint coordinate system of each pixel of the original viewpoint image;
(2) determining whether a pixel already exists at this mapping position; if so, a mapping coincidence exists and it is processed with the following improved Z-buffer algorithm; otherwise going to step (3);
(2-1) determining whether the absolute difference between the depth value of the incoming mapped pixel and that of the existing pixel is smaller than a preset noise threshold N; if so, considering the two points coplanar and coincident, and going to step (2-2); otherwise the two points lie in different planes, and by the principle that foreground occludes background the pixel information with the larger depth value is kept, i.e. the foreground pixel information is used as the mapping-point pixel information, going to step (5);
(2-2) computing, for the incoming pixel and the existing pixel respectively, the sum of colour differences to the surrounding neighbouring pixels; choosing as the pixel of the current mapping position the point whose average colour difference to its neighbours is smaller, and going to step (5);
(3) when no mapping coincidence exists, processing as follows:
(3-1) averaging the depths of the pixels on the two sides, D_avg = (D_l + D_r)/2, where D_avg is the average depth of the pixels on the two sides and D_l, D_r are the depth values of the left and right pixels respectively;
(3-2) if the absolute difference between the depth value D of the current mapping point and D_avg is greater than a threshold M, and the difference between the left and right pixel depths is smaller than M, judging the current point to be a bad pixel; filling the missing pixel information of the current mapping position with the average information of the two mapped pixels on the left and right, and going to step (4); otherwise going to (3-3);
(3-3) if the absolute difference between the depth value D of the currently mapped pixel and D_avg is smaller than M, or the difference between the left and right pixel depths is greater than M, judging the currently mapped pixel to be good; filling the missing pixel information of the current mapping position with the information of the pixel mapped from the original viewpoint image, and going to step (4);
(4) crack-hole filling, comprising the following sub-steps:
(4-1) after each mapping, computing the crack-hole amount ΔP = P_i − P_{i−1}, where P_i is the horizontal mapping position of the current mapping point and P_{i−1} that of the previous mapping point;
determining whether ΔP is less than or equal to the crack threshold W; if so, going to (4-2) and filling the crack hole; otherwise not filling, and going to step (5);
(4-2) taking the difference of the spatial depths of the two successively mapped pixels and determining whether its absolute value is smaller than M; if so, considering the two points to lie in the same plane and filling the crack hole with the average of their pixel information; otherwise filling the current crack hole with the pixel information of whichever of the two points lies in the background;
(5) this round of selecting and filling mapping points ends.
2. The mapping point method of claim 1, characterised in that said pixel information comprises depth and colour information.
3. The mapping point method of claim 1, characterised in that the noise threshold of step (2) is N = (D_max − D_min) × 0.02, where D_max and D_min are the maximum and minimum depth values in the depth map of spatial depth values.
4. The mapping point method of claim 1, characterised in that the threshold of step (3) is M = (D_avg − D_min) × 0.05.
5. The mapping point method of claim 1, characterised in that the crack threshold W of step (4) is 2.
CN201610486806.5A 2016-06-28 2016-06-28 Method for selecting and filling reasonable mapping points in virtual viewpoint synthesis Active CN106060512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610486806.5A CN106060512B (en) Method for selecting and filling reasonable mapping points in virtual viewpoint synthesis


Publications (2)

Publication Number Publication Date
CN106060512A true CN106060512A (en) 2016-10-26
CN106060512B CN106060512B (en) 2017-08-01

Family

ID=57167154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610486806.5A Active CN106060512B (en) 2016-06-28 2016-06-28 It is a kind of to choose and fill up rationally mapping point methods in virtual view synthesis

Country Status (1)

Country Link
CN (1) CN106060512B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102325259A (en) * 2011-09-09 2012-01-18 青岛海信数字多媒体技术国家重点实验室有限公司 Method and device for synthesizing virtual viewpoints in multi-viewpoint video
KR20130067474A (en) * 2011-12-14 2013-06-24 연세대학교 산학협력단 Hole filling method and apparatus
CN103337081A (en) * 2013-07-12 2013-10-02 南京大学 Shading judgment method and device based on depth layer


Non-Patent Citations (2)

Title
丁焱 (Ding Yan): "Research on hole-filling technology in depth-map-based virtual viewpoint rendering" (基于深度图的虚拟视点绘制中空洞填补技术研究), Master's thesis full-text database *
汪敬嫒 (Wang Jing'ai): "Research on depth-image-based virtual viewpoint rendering algorithms" (基于深度图像的虚拟视点绘制算法研究), Master's thesis full-text database *

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN108769662A (en) * 2018-07-03 2018-11-06 京东方科技集团股份有限公司 Multi-view naked-eye 3D image hole-filling method, apparatus and electronic device
CN108769662B (en) * 2018-07-03 2020-01-07 京东方科技集团股份有限公司 Multi-view naked eye 3D image hole filling method and device and electronic equipment
US11043152B2 (en) 2018-07-03 2021-06-22 Boe Technology Group Co., Ltd. Method and apparatus for filling holes in naked-eye 3D multi-viewpoint image, and electronic device for performing the method
CN117893450A (en) * 2024-03-15 2024-04-16 西南石油大学 Digital pathological image enhancement method, device and equipment
CN117893450B (en) * 2024-03-15 2024-05-24 西南石油大学 Digital pathological image enhancement method, device and equipment

Also Published As

Publication number Publication date
CN106060512B (en) 2017-08-01

Similar Documents

Publication Publication Date Title
CN104780355B (en) Empty restorative procedure based on the degree of depth in a kind of View Synthesis
CN104616286B (en) Quick semi-automatic multi views depth restorative procedure
CN103024421B (en) Method for synthesizing virtual viewpoints in free viewpoint television
CN101556700B (en) Method for drawing virtual view image
CN104376535A (en) Rapid image repairing method based on sample
CN104079914B (en) Based on the multi-view image ultra-resolution method of depth information
CN103914820B (en) Image haze removal method and system based on image layer enhancement
CN103414909B (en) A kind of hole-filling method being applied to dimensional video virtual viewpoint synthesis
CN102930593B (en) Based on the real-time drawing method of GPU in a kind of biocular systems
CN104837000B (en) The virtual visual point synthesizing method that a kind of utilization profile is perceived
CN104850847B (en) Image optimization system and method with automatic thin face function
CN103384343B (en) A kind of method and device thereof filling up image cavity
CN102890785A (en) Method for service robot to recognize and locate target
CN104822059B (en) A kind of virtual visual point synthesizing method accelerated based on GPU
CN106060509B (en) Introduce the free view-point image combining method of color correction
CN102609974A (en) Virtual viewpoint image generation process on basis of depth map segmentation and rendering
CN111047709A (en) Binocular vision naked eye 3D image generation method
CN106600632A (en) Improved matching cost aggregation stereo matching algorithm
CN107240073A (en) A kind of 3 d video images restorative procedure merged based on gradient with clustering
CN107909079A (en) One kind collaboration conspicuousness detection method
CN106028020B (en) A kind of virtual perspective image cavity complementing method based on multi-direction prediction
CN106060512A (en) Method for selecting and filling reasonable mapping points in virtual viewpoint synthesis
CN103761766A (en) Three-dimensional object model texture mapping algorithm based on tone mapping and image smoothing
CN103945209B (en) A kind of DIBR method based on piecemeal projection
CN104661014B (en) The gap filling method that space-time combines

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant