CN104052985B - Three-dimensional image correction device and three-dimensional image correction method - Google Patents
- Publication number
- CN104052985B · CN201410312022.1A
- Authority
- CN
- China
- Prior art keywords
- image
- dimensional image
- imagery zone
- pixel
- difference
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The invention relates to a three-dimensional image correction device comprising a depth generator, a depth-image-based renderer, an image processor, and an image corrector. The depth generator receives an input image and processes it to generate a depth map. The renderer generates a three-dimensional image from the input image and the depth map. The image processor detects a pixel-difference region of the depth map, detects a corresponding first image region in the input image and a corresponding second image region in the three-dimensional image according to the pixel-difference region, and judges whether the first image region contains only a single object. When the first image region contains only a single object, the image corrector corrects the second image region in the three-dimensional image.
Description
Technical field
The present invention relates to an image processing device and an image processing method, and in particular to a three-dimensional image correction device and a three-dimensional image correction method.
Background art
With the progress of technology, image playback has gradually moved from the traditional two-dimensional (Two Dimension, 2D) mode to the three-dimensional (Three Dimension, 3D) mode. Especially after the movie Avatar was shown in 2009, a wave of 3D-image enthusiasm followed, and display manufacturers released 3D displays one after another to meet consumer demand.
Because 3D technology is not yet widespread, 3D signal sources remain scarce. To solve this problem, multiple images or a single image can currently serve as the input source: the depth generator inside a 3D display processes the input source to produce a depth map, and the depth-image-based rendering (DIBR) unit of the 3D display then produces a 3D image from the depth map.
However, the above depth generator may misjudge the input source. For example, each part of the same object should normally share the same depth information, but the depth generator sometimes interprets the same object as having two different depths in the vertical direction and produces an erroneous depth map. Once the DIBR unit produces a 3D image from the erroneous depth map, the object exhibits a jagged (3D jagged) artifact, which severely degrades the quality of the 3D display.
As can be seen, the existing approach clearly still suffers from inconvenience and defects and leaves much room for improvement. To solve the above problems, those in the related field have sought solutions painstakingly, but no suitable solution has yet been developed.
Summary of the invention
This summary aims to provide a simplified overview of the disclosure so that the reader gains a basic understanding of it. It is not a complete overview of the disclosure, is not intended to point out important or key elements of embodiments of the invention, and does not define the scope of the invention.
One purpose of the present invention is to provide a three-dimensional image correction device and a three-dimensional image correction method that improve upon the problems in the prior art.
To achieve the above purpose, one technical aspect of the present invention relates to a three-dimensional image correction device. The device comprises a depth generator, a depth-image-based renderer, an image processor, and an image corrector. In operation, the depth generator receives an input image and processes it to produce a depth map. The renderer produces a three-dimensional image from the input image and the depth map. The image processor detects a pixel-difference region of the depth map, detects a corresponding first image region in the input image and a corresponding second image region in the three-dimensional image according to the pixel-difference region, and judges whether the first image region contains only a single object. When the first image region contains only a single object, the image corrector corrects the second image region in the three-dimensional image.
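The coupling of the four components can be pictured as a simple data flow. The following is a minimal, hypothetical sketch in Python; every function name and the trivial per-stage logic (brightness treated as depth, a crude disparity shift) are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch of the pipeline: depth generator -> DIBR -> 3D image.
# Images are 2D lists of gray values in 0-255; all logic is a stand-in.

def depth_generator(input_image):
    """Stand-in depth generator: treat brightness directly as depth."""
    return [row[:] for row in input_image]

def dibr(input_image, depth_map):
    """Stand-in DIBR: shift each pixel horizontally by a depth-derived
    disparity to synthesize one eye's view of the 3D image."""
    h, w = len(input_image), len(input_image[0])
    view = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nx = min(w - 1, x + depth_map[y][x] // 128)  # crude disparity
            view[y][nx] = input_image[y][x]
    return view

def pipeline(input_image):
    """Run the first two stages; detection and correction follow later."""
    depth_map = depth_generator(input_image)
    image_3d = dibr(input_image, depth_map)
    return depth_map, image_3d
```

The detection and correction stages that consume the depth map and the 3D view are elaborated in the detailed description.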
According to one embodiment of the invention, the image processor further calculates a first pixel difference within the first image region and compares it with a first preset threshold; when the first pixel difference is smaller than the first preset threshold, it determines that the first image region contains only a single object.
According to another embodiment of the invention, the first pixel difference is a luminance difference or a chrominance difference within the first image region.
According to another embodiment of the invention, the image processor further calculates a second pixel difference within the pixel-difference region and compares it with a second preset threshold; the image corrector corrects the second image region in the three-dimensional image when the first image region contains only a single object and the second pixel difference is greater than the second preset threshold.
According to a further embodiment of the invention, the second pixel difference is a grayscale difference within the pixel-difference region.
According to yet another embodiment of the invention, the image corrector performs smoothing on the second image region in the three-dimensional image.
According to a further embodiment of the invention, the image processor further comprises an edge detector for performing edge detection on the pixel-difference region to obtain an edge region; the image processor then obtains the first image region in the input image and the second image region in the three-dimensional image according to the edge region.
To achieve the above purpose, another technical aspect of the present invention relates to a three-dimensional image correction method. The method comprises the steps of: receiving an input image and processing it to produce a depth map; producing a three-dimensional image from the input image and the depth map; detecting a pixel-difference region of the depth map; obtaining a corresponding first image region in the input image and a corresponding second image region in the three-dimensional image according to the pixel-difference region; judging whether the first image region contains only a single object; and, when the first image region contains only a single object, correcting the second image region in the three-dimensional image.
According to one embodiment of the invention, the step of judging whether the first image region contains only a single object comprises: calculating a first pixel difference within the first image region; comparing the first pixel difference with a first preset threshold; and, when the first pixel difference is smaller than the first preset threshold, determining that the first image region contains only a single object.
According to another embodiment of the invention, the first pixel difference is a luminance difference or a chrominance difference within the first image region.
According to another embodiment of the invention, the three-dimensional image correction method further comprises the steps of: calculating a second pixel difference within the pixel-difference region; and comparing the second pixel difference with a second preset threshold. The step of correcting the second image region in the three-dimensional image when the first image region contains only a single object then comprises: correcting the second image region in the three-dimensional image when the first image region contains only a single object and the second pixel difference is greater than the second preset threshold.
According to a further embodiment of the invention, the second pixel difference is a grayscale difference within the pixel-difference region.
According to yet another embodiment of the invention, the step of correcting the second image region in the three-dimensional image comprises: performing smoothing on the second image region in the three-dimensional image.
According to a further embodiment of the invention, the step of detecting the pixel-difference region of the depth map comprises: performing edge detection on the pixel-difference region to obtain an edge region. The steps of obtaining the first image region in the input image and the second image region in the three-dimensional image according to the pixel-difference region then comprise: obtaining the first image region in the input image and the second image region in the three-dimensional image according to the edge region.
Therefore, according to the technical content of the present invention, the embodiments propose a three-dimensional image correction device and a three-dimensional image correction method whose mechanism can correct a 3D image that exhibits jagged artifacts because the DIBR unit rendered it from an erroneous depth map, thereby ensuring the quality of the 3D display.
After reading the following description, a person having ordinary skill in the art will readily understand the essential spirit of the present invention, its other objects, and the technical means and embodiments adopted herein.
Brief description of the drawings
To make the above and other purposes, features, advantages, and embodiments of the present invention more apparent, the accompanying drawings are described as follows:
Fig. 1 illustrates a schematic diagram of a three-dimensional image correction device according to one embodiment of the invention;
Fig. 2 illustrates a flow chart of a three-dimensional image correction method according to a further embodiment of the invention;
Fig. 3A illustrates a schematic image with jagged artifacts;
Fig. 3B illustrates a schematic image after processing by the three-dimensional image correction device according to one embodiment of the invention;
Fig. 3C illustrates a schematic image after processing by the three-dimensional image correction device according to another embodiment of the invention;
Fig. 4 illustrates a schematic diagram of a three-dimensional image correction device according to a further embodiment of the invention.
Reference numerals:
100, 100a: three-dimensional image correction device
110: depth generator
120: depth-image-based rendering unit
130: image processor
132: edge detector
140: image corrector
200: method
210~260: steps
500: input image
510: first image region
520: first image region
600: depth map
610: pixel-difference region
620: edge region
700: 3D image
710: second image region
720: second image region
In accordance with common practice, the various features and elements in the figures are not drawn to scale; they are drawn to best present the specific features and elements relevant to the present invention. In addition, the same or similar reference numerals refer to similar elements/components across the different figures.
Detailed description of the invention
To make the description of the present disclosure more detailed and complete, the following provides an illustrative description of implementation aspects and specific embodiments of the invention; this is, however, not the only form in which the specific embodiments of the invention may be implemented or used. The description covers the features of multiple specific embodiments as well as the method steps used to construct and operate these specific embodiments and their sequence. However, other specific embodiments may also be used to achieve the same or equivalent functions and step sequences.
Unless otherwise defined in this specification, scientific and technical terms used herein have the meanings commonly understood by those of ordinary skill in the art to which the present invention belongs. In addition, when not in conflict with the context, singular nouns used in this specification cover the plural form of those nouns, and plural nouns used also cover their singular form.
Furthermore, "coupled" as used herein may mean that two or more elements are in direct physical or electrical contact with each other, or in indirect physical or electrical contact with each other; it may also mean that two or more elements operate or act on one another.
To solve the problem that the DIBR unit produces a three-dimensional (Three Dimension, 3D) image with jagged artifacts from an erroneous depth map produced by the depth generator, an embodiment of the present invention proposes a three-dimensional image correction device for correcting the jagged (3D jagged) artifacts of the above 3D image.
The three-dimensional image correction device is illustrated in Fig. 1 as circuit blocks to facilitate understanding of the present invention. As shown, the three-dimensional image correction device 100 comprises a depth generator 110, a depth-image-based rendering (DIBR) unit 120, an image processor 130, and an image corrector 140. Structurally, the depth generator 110 is electrically coupled to the DIBR unit 120 and the image processor 130; the DIBR unit 120 is electrically coupled to the image processor 130 and the image corrector 140; and the image processor 130 is electrically coupled to the image corrector 140. However, the present invention is not limited to the circuit blocks shown in Fig. 1, which merely illustrate one implementation of the present invention.
To make the correction mechanism of the three-dimensional image correction device 100 shown in Fig. 1 easy to understand, the following description refers to the step flow of the three-dimensional image correction method 200 shown in Fig. 2.
First, as indicated in step 210, the depth generator 110 receives an input image 500 and processes it to produce a depth map 600. The input image 500 may be multiple images or a single image and contains many pieces of image information; after the depth generator 110 processes the image information of the input image 500, it produces corresponding pieces of depth information, which constitute the depth map 600 shown in the upper half of Fig. 1.
Subsequently, in step 220, the DIBR unit 120 produces a left-eye image and a right-eye image from the input image 500 and the depth map 600, which are used to synthesize the 3D image 700. In the prior art, the depth generator 110 may produce an erroneous depth map 600; if the DIBR unit 120 produces the left-eye or right-eye image from the erroneous depth map 600, the finally synthesized 3D image 700 exhibits jagged artifacts. Therefore, the three-dimensional image correction device 100 of this application is further equipped with the following correction mechanism to solve the above problem, as described below.
Referring to step 230, the image processor 130 detects the pixel-difference region 610 in the depth map 600. The pixel-difference region 610 may include positional information, for example the information that the pixel-difference region 610 is located in the lower-right corner of the depth map 600.
Then, as illustrated in step 240, the image processor 130 detects, according to the pixel-difference region 610, the first image region 510 in the input image 500 shown in Fig. 1, and correspondingly detects the second image region 710 in the left-eye or right-eye image of the 3D image 700 shown in Fig. 1. It should be noted that the depth map 600 is essentially obtained by computing the depth information of the input image 500; therefore, the depth map 600 reflects the depth information of corresponding regions in the input image 500. Referring to the upper half of Fig. 1, the pixel-difference region 610 in the lower-right part of the depth map 600 essentially reflects the depth information of the first image region 510 in the lower-right part of the input image 500. As shown, the first image region 510 contains only a single object, yet the depth generator 110 misjudged it, producing the pixel-difference region 610 in the corresponding part of the depth map 600; the DIBR unit 120 then renders from the erroneous depth map 600 and produces jagged artifacts in the corresponding second image region 710 of the 3D image 700.
In summary, although the input image 500 contains only a single object, the misjudgment of the depth generator 110 eventually produces a 3D image 700 with jagged artifacts. Microscopically, the single object looks like two objects in the 3D image 700; macroscopically, the single object appears distorted and deformed in the 3D image 700, losing its original form, which severely degrades the quality of the 3D image 700. To solve this problem, the three-dimensional image correction device 100 of the embodiment of the present invention can identify the regions of the depth map 600 where pixel differences occur and correct the corresponding regions in the 3D image 700 to solve the above problem.
However, more specifically, if image correction is applied to every region where a pixel difference occurs, this processing may cause too many regions of the 3D image 700 to be corrected and blur the 3D image 700; refer to Figs. 3A and 3B for an explanation. Fig. 3A illustrates a schematic 3D image with jagged artifacts; as shown, the English letter F exhibits a jagged artifact at point A, which can be solved via the above processing. The schematic 3D image produced by the above processing is shown in Fig. 3B: the three-dimensional image correction device 100 corrected too many regions of the 3D image, making the whole 3D image rather blurry. Accordingly, although the above processing can solve the jagged-artifact problem (for example, the English letter F shows no obvious jagged artifact at point B of Fig. 3B), the blurred 3D image equally affects the user's perception.
To solve the blurring problem, refer to step 250: the image processor 130 judges whether the first image region 510 contains only a single object, as shown in the upper half of Fig. 1. If the first image region 510 contains only a single object, step 260 is performed: the image corrector 140 corrects the second image region 710 in the 3D image 700. In this embodiment, the correction performed by the image corrector 140 is, for example, smoothing of the second image region 710 in the 3D image 700 to improve the jagged artifact of the 3D image 700. In this way, the three-dimensional image correction device 100 corrects only the truly problematic regions, rather than simply correcting every 3D image region corresponding to an occurring pixel difference, thereby solving the jagged-artifact problem while maintaining the sharpness of the 3D image.
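The smoothing correction of step 260 can be sketched as a box filter applied only inside the flagged region, which is one common way to realize smoothing; the patent does not fix a particular filter, so the 3x3 kernel and the function name below are assumptions:

```python
# Illustrative smoothing correction: apply a 3x3 box blur only inside
# the flagged region of the 3D image, leaving the rest of the image sharp.

def smooth_region(image, region):
    """image: 2D list of gray values; region: (x0, y0, x1, y1), inclusive."""
    x0, y0, x1, y1 = region
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            acc, n = 0, 0
            for dy in (-1, 0, 1):          # average over the 3x3
                for dx in (-1, 0, 1):      # neighborhood, clipped
                    yy, xx = y + dy, x + dx  # at the image border
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += image[yy][xx]
                        n += 1
            out[y][x] = acc // n
    return out
```

Restricting the blur to the flagged region is what preserves sharpness everywhere else, matching the behavior described for Fig. 3C.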
The result of the above processing is shown in Fig. 3C. Because the three-dimensional image correction device 100 of the embodiment corrects only the truly problematic region, for example only at point C of Fig. 3C, the English letter F shows no obvious jagged artifact at point C of Fig. 3C while the sharpness of the 3D image is maintained: the sharpness of the English letter e in Fig. 3A and Fig. 3C is similar, and the sharpness of the English letter e in Fig. 3C is higher than that in Fig. 3B.
Regarding the step of judging whether the first image region 510 contains only a single object: specifically, the image processor 130 calculates the pixel difference within the first image region 510, compares the pixel difference with a preset threshold, and determines that the first image region 510 contains only a single object when the pixel difference is smaller than the preset threshold. For example, the image processor 130 calculates the pixel difference in the vertical direction within the first image region 510, i.e. the pixel difference along the Y axis, using the following formula:
ΔI = |I(x, y) - I(x, y ± 1)| ... (Formula 1)
In Formula 1, ΔI is the pixel difference, I(x, y) is the pixel value of a reference point, and I(x, y ± 1) is the pixel value of the point one unit up or down the Y axis from the reference point. After the pixel difference ΔI within the first image region 510 is obtained, it is compared with the preset threshold; if ΔI is smaller than the preset threshold, the pixel differences of the image within the first image region 510 are small, and the first image region 510 is therefore judged to contain only a single object. In this embodiment, the preset threshold may be set according to actual demand. In addition, the present invention is not limited to the above embodiment, which merely illustrates one implementation of the present invention.
The above judgment is based on the principle that a single object is usually of the same color family, so the pixel differences of a single object should be small. However, the present invention is not limited thereto; in other embodiments, other factors may also be considered, such as the combined information of color and brightness, or any other factor available for judgment, to judge whether the first image region 510 contains only a single object.
In addition, in step 250, if the image processor 130 judges that the first image region 510 does not contain only a single object, for example when the first image region 510 contains two or more objects, step 210 is performed again. Likewise, referring to step 260, after the correction procedure has been performed once, step 210 is executed again. It should be noted that the steps of the three-dimensional image correction method 200 shown in Fig. 2 are not limited to being performed by the elements of the three-dimensional image correction device 100; the above embodiment merely illustrates one implementation of the present invention, and the scope of the three-dimensional image correction method 200 of the embodiments is defined by the claims.
In one embodiment, the image processor 130 further calculates the pixel difference of the pixel-difference region 610 of the depth map 600 shown in Fig. 1 and compares this pixel difference with a preset threshold; if the pixel difference is greater than the preset threshold, the second image region 710 in the 3D image 700 is corrected. This pixel difference is the grayscale difference within the pixel-difference region 610 of the depth map 600; the gray value is a number representing the gray level of the depth map 600, generally expressed as 0~255, where 0 represents black and 255 represents white. For example, the image processor 130 calculates the pixel difference in the vertical direction within the pixel-difference region 610, i.e. the pixel difference along the Y axis, using the following formula:
ΔD = |D(x, y) - D(x, y ± 1)| ... (Formula 2)
In Formula 2, ΔD is the pixel difference, D(x, y) is the pixel value of a reference point, and D(x, y ± 1) is the pixel value of the point one unit up or down the Y axis from the reference point. After the pixel difference ΔD within the pixel-difference region 610 is obtained, it is compared with the preset threshold; if ΔD is greater than the preset threshold, the pixel differences of the image within the pixel-difference region 610 are too large, and the second image region 710 in the 3D image 700 therefore needs to be corrected. In this embodiment, the preset threshold may be set according to actual demand. In addition, the present invention is not limited to the above embodiment, which merely illustrates one implementation of the present invention.
In another embodiment, the image processor 130 of the embodiment of the present invention may use Formula 1 and Formula 2 together, correcting the second image region 710 in the 3D image 700 when the pixel difference ΔD within the pixel-difference region 610 is greater than a preset threshold Th1 (indicating that the second image region 710 in the 3D image 700 needs correction) and the pixel difference ΔI within the first image region 510 is smaller than a preset threshold Th2 (indicating that the first image region 510 contains only a single object). In this way, the correction result of the three-dimensional image correction device 100 of the embodiment becomes more accurate.
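The combined use of Formula 1 and Formula 2 with thresholds Th1 and Th2 reduces to a two-condition test. The helper below is a sketch under the assumption that the maximum vertical difference in each region is the quantity compared against the thresholds; the function names are illustrative:

```python
# Combined decision: correct only when the depth-map difference dD
# exceeds Th1 (Formula 2) AND the input-image difference dI stays
# below Th2 (Formula 1, i.e. a single object).

def max_vertical_diff(region):
    """Largest |p(x, y) - p(x, y + 1)| over a 2D list of values."""
    return max(
        abs(region[y][x] - region[y + 1][x])
        for y in range(len(region) - 1)
        for x in range(len(region[0]))
    )

def needs_correction(depth_region, image_region, th1, th2):
    d_d = max_vertical_diff(depth_region)   # Formula 2 on the depth map
    d_i = max_vertical_diff(image_region)   # Formula 1 on the input image
    return d_d > th1 and d_i < th2
```

Requiring both conditions is what keeps regions that legitimately contain several objects from being blurred.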
Fig. 4 illustrates a schematic diagram of a three-dimensional image correction device 100a according to another embodiment of the present invention. Compared with the three-dimensional image correction device 100 shown in Fig. 1, the image processor 130 of the three-dimensional image correction device 100a further comprises an edge detector 132 for performing edge detection on the pixel-difference region 610 to obtain an edge region 620. In addition, the image processor 130 obtains the first image region 520 in the input image 500 and the second image region 720 in the 3D image 700 according to the edge region 620. The purpose of this processing is detailed below.
If the above steps are performed according to the pixel-difference region 610 shown in Fig. 1, the image processor 130 must process a larger image area, because the pixel-difference region 610 is larger than the edge region 620. After the above processing, the image processor 130 can instead concentrate on the edge region 620 within the pixel-difference region 610, i.e. the part that can actually produce the jagged artifact, and process it. Compared with the pixel-difference region 610, the edge region 620 is smaller in scope and faster to process, which further improves the processing efficiency of the three-dimensional image correction device 100a and makes the processing result more accurate.
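As a rough sketch of the narrowing performed by the edge detector 132 (the patent does not specify an operator, so a simple vertical-gradient test stands in for the edge detection here):

```python
# Illustrative edge-detection narrowing: keep only the positions of
# the pixel-difference region whose vertical gradient is large, so
# later correction touches a smaller area.

def edge_area(depth_region, grad_threshold):
    """Return the set of (y, x) positions whose vertical gradient
    exceeds grad_threshold; depth_region is a 2D list of gray values."""
    edges = set()
    for y in range(len(depth_region) - 1):
        for x in range(len(depth_region[0])):
            if abs(depth_region[y][x] - depth_region[y + 1][x]) > grad_threshold:
                edges.add((y, x))
    return edges
```

In practice a standard operator such as Sobel or Canny could serve the same narrowing purpose.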
The three-dimensional image correction method 200 described above may be performed by software, hardware, and/or firmware. For example, if execution speed and accuracy are the primary considerations, hardware and/or firmware may mainly be chosen; if design flexibility is the primary consideration, software may mainly be chosen; alternatively, software, hardware, and firmware may work together. It should be appreciated that the examples given above carry no judgment of which is better or worse, nor are they intended to limit the present invention; those familiar with this art may design flexibly as needed.
Furthermore, a person having ordinary skill in the art will appreciate that each step of the three-dimensional image correction method 200 is named according to the function it performs, merely to make the technique of this disclosure clearer and easier to understand, and the naming is not intended to limit these steps. Combining the steps into a single step, splitting them into multiple steps, or moving any step to be executed within another step all still fall within the scope of the embodiments of the present invention.
As can be seen from the embodiments described above, applying the present invention provides the following advantages. By proposing a three-dimensional image correction device and a three-dimensional image correction method, the embodiments of the present invention provide a mechanism capable of correcting a three-dimensional image in which the depth image rendering device has produced jagged artifacts from an erroneous depth map, thereby ensuring the quality of three-dimensional display.
Furthermore, by determining whether the first image region contains only a single object, the three-dimensional image correction device and the three-dimensional image correction method can locate the truly problematic region and correct it, solving the jagged-artifact problem of the three-dimensional image while maintaining its sharpness.
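The single-object check and the conditional correction described above can be sketched as follows. This is a hedged illustration only: using the max-min spread as the "pixel value difference" of claims 2 and 4, and a 3x3 box filter as the smoothing process of claim 6, are assumptions; the patent leaves the exact difference measure and smoothing method open.

```python
import numpy as np

def contains_single_object(region_luma, t1):
    """First check (claims 2-3): the first image region is judged to hold a
    single object when its luminance spread stays below preset threshold t1.
    The max-min spread as the first pixel value difference is an assumption."""
    return float(region_luma.max() - region_luma.min()) < t1

def needs_correction(depth_region, t2):
    """Second check (claims 4-5): the grayscale spread of the depth map's
    pixel difference region must exceed a second preset threshold t2."""
    return float(depth_region.max() - depth_region.min()) > t2

def correct_region(image_region):
    """Correction (claim 6): one possible smoothing process, a 3x3 box filter
    over the second image region in the three-dimensional image."""
    padded = np.pad(image_region.astype(float), 1, mode='edge')
    h, w = image_region.shape
    out = np.zeros((h, w), dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out / 9.0
```

In this sketch, the second image region is smoothed only when both checks pass, i.e. the first image region holds a single object and the depth difference is large enough, matching the two-threshold condition of claim 4.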
Although specific embodiments of the present invention are disclosed above, they are not intended to limit the present invention. A person having ordinary skill in the technical field to which the present invention belongs may make various changes and modifications without departing from the principle and spirit of the present invention; therefore, the protection scope of the present invention shall be determined by the appended claims.
Claims (14)
1. A three-dimensional image correction device, characterized by comprising:
a depth generator, configured to receive an input image and to process the input image to produce a depth map;
a depth image rendering device, configured to produce a three-dimensional image according to the input image and the depth map;
an image processor, configured to detect a pixel difference region of the depth map, wherein the pixel difference region is a region of related pixels whose pixel value differences meet a condition, the image processor correspondingly detecting a first image region in the input image according to the pixel difference region and correspondingly detecting a second image region in the three-dimensional image according to the pixel difference region, wherein the image processor is further configured to determine whether the first image region contains only a single object; and
an image corrector, configured to correct the second image region in the three-dimensional image when the first image region contains only a single object.
2. The three-dimensional image correction device according to claim 1, characterized in that the image processor is further configured to calculate a first pixel value difference in the first image region and to compare the first pixel value difference with a first preset threshold, and determines that the first image region contains only a single object when the first pixel value difference is smaller than the first preset threshold.
3. The three-dimensional image correction device according to claim 2, characterized in that the first pixel value difference is a luminance difference or a chroma difference in the first image region.
4. The three-dimensional image correction device according to claim 2, characterized in that the image processor is further configured to calculate a second pixel value difference in the pixel difference region and to compare the second pixel value difference with a second preset threshold, wherein the image corrector is further configured to correct the second image region in the three-dimensional image when the first image region contains only a single object and the second pixel value difference is greater than the second preset threshold.
5. The three-dimensional image correction device according to claim 4, characterized in that the second pixel value difference is a grayscale difference in the pixel difference region.
6. The three-dimensional image correction device according to claim 1, characterized in that the image corrector is configured to perform a smoothing process on the second image region in the three-dimensional image.
7. The three-dimensional image correction device according to claim 1, characterized in that the image processor further comprises an edge detector configured to perform edge detection on the pixel difference region to obtain an edge area, wherein the image processor is further configured to correspondingly obtain the first image region in the input image according to the edge area and to correspondingly obtain the second image region in the three-dimensional image according to the edge area.
8. A three-dimensional image correction method, characterized by comprising:
receiving an input image, and processing the input image to produce a depth map;
producing a three-dimensional image according to the input image and the depth map;
detecting a pixel difference region of the depth map, the pixel difference region being a region of related pixels whose pixel value differences meet a condition;
correspondingly obtaining a first image region in the input image according to the pixel difference region, and correspondingly obtaining a second image region in the three-dimensional image according to the pixel difference region;
determining whether the first image region contains only a single object; and
correcting the second image region in the three-dimensional image when the first image region contains only a single object.
9. The three-dimensional image correction method according to claim 8, characterized in that the step of determining whether the first image region contains only a single object comprises:
calculating a first pixel value difference in the first image region;
comparing the first pixel value difference with a first preset threshold; and
determining that the first image region contains only a single object when the first pixel value difference is smaller than the first preset threshold.
10. The three-dimensional image correction method according to claim 9, characterized in that the first pixel value difference is a luminance difference or a chroma difference in the first image region.
11. The three-dimensional image correction method according to claim 9, characterized by further comprising:
calculating a second pixel value difference in the pixel difference region; and
comparing the second pixel value difference with a second preset threshold;
wherein the step of correcting the second image region in the three-dimensional image when the first image region contains only a single object comprises:
correcting the second image region in the three-dimensional image when the first image region contains only a single object and the second pixel value difference is greater than the second preset threshold.
12. The three-dimensional image correction method according to claim 11, characterized in that the second pixel value difference is a grayscale difference in the pixel difference region.
13. The three-dimensional image correction method according to claim 8, characterized in that the step of correcting the second image region in the three-dimensional image comprises:
performing a smoothing process on the second image region in the three-dimensional image.
14. The three-dimensional image correction method according to claim 8, characterized in that the step of detecting the pixel difference region of the depth map comprises:
performing edge detection on the pixel difference region to obtain an edge area;
wherein the step of correspondingly obtaining the first image region in the input image according to the pixel difference region, and correspondingly obtaining the second image region in the three-dimensional image according to the pixel difference region, comprises:
correspondingly obtaining the first image region in the input image according to the edge area, and correspondingly obtaining the second image region in the three-dimensional image according to the edge area.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW103115546 | 2014-04-30 | | |
| TW103115546A TWI511079B (en) | 2014-04-30 | 2014-04-30 | Three-dimension image calibration device and method for calibrating three-dimension image |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN104052985A | 2014-09-17 |
| CN104052985B | 2016-08-10 |
Also Published As

| Publication Number | Publication Date |
|---|---|
| TW201541406A | 2015-11-01 |
| TWI511079B | 2015-12-01 |
Legal Events

| Code | Title | Description |
|---|---|---|
| C06 / PB01 | Publication | |
| C10 / SE01 | Entry into substantive examination | Entry into force of request for substantive examination |
| C14 / GR01 | Patent grant | Grant of patent or utility model |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2016-08-10; Termination date: 2020-07-02 |