CN107767339A - Binocular stereo image stitching method - Google Patents
Binocular stereo image stitching method
- Publication number
- CN107767339A (application CN201710948182.9, also published as CN201710948182A)
- Authority
- CN
- China
- Prior art keywords
- eye image
- stitching
- disparity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Abstract
The invention discloses a binocular stereo image stitching method, comprising: S1: capture multiple groups of images with a binocular camera, then extract and screen features to obtain feature point sets, where each group of images comprises a first eye image captured by the camera facing the first direction and a second eye image captured by the camera facing the second direction; S2: transform and stitch the first eye images according to the feature point sets screened in step S1, obtaining the transformed and stitched first eye image; S3: compute a target disparity map from the captured groups of images; S4: transform and stitch the second eye images according to the transformed and stitched first eye image and the target disparity map, obtaining the transformed and stitched second eye image; S5: combine the transformed and stitched first eye image from step S2 with the transformed and stitched second eye image from step S4 to obtain the final stereo image. The proposed binocular stereo image stitching method not only achieves seamless stitching and reduced ghosting, but also places no restriction on the position and angle at which the cameras are placed.
Description
Technical field
The present invention relates to computer vision and image processing, and in particular to a binocular stereo image stitching method.
Background art
With the rise of VR and AR, people demand ever higher resolution, field of view, and quality from images. Because the field of view of a single camera is limited, image stitching is needed to obtain wide-angle or even 360-degree panoramic images. Image stitching is currently one of the key techniques in VR and AR, and its applications are increasingly broad, including medicine, education, sports, and aerospace.
Multiple pictures taken with one or more cameras can be stitched into a wide-angle or panoramic image, but when monocular photos are stitched, the unknown depth from the camera to the scene causes blurring, ghosting, and misregistration in the result. With the rise of stereo images and video, more and more researchers are studying stereo image stitching, which currently faces two main difficulties. First, the handling of parallax: parallax causes ghosting in the stitched result and degrades the final visual effect. Second, the stitching of arbitrarily captured stereo images: existing high-quality stitching algorithms usually require the cameras to be placed or rotated according to fixed rules, for example shooting with the camera fixed and rotating through a full circle.
The above background is disclosed only to aid understanding of the concept and technical solution of the present invention; it does not necessarily belong to the prior art of the present application. Absent clear evidence that the above content was disclosed before the filing date of the present application, the background should not be used to assess the novelty and inventiveness of the application.
Summary of the invention
To solve the above technical problems, the present invention proposes a binocular stereo image stitching method that not only achieves seamless stitching and reduced ghosting, but also places no restriction on camera position and angle.
To this end, the present invention adopts the following technical solution:
The invention discloses a binocular stereo image stitching method comprising the following steps:
S1: capture multiple groups of images with a binocular camera, then extract and screen features to obtain feature point sets, where each group of images comprises a first eye image captured by the camera facing the first direction and a second eye image captured by the camera facing the second direction;
S2: transform and stitch the first eye images according to the feature point sets screened in step S1, obtaining the transformed and stitched first eye image;
S3: compute a target disparity map from the captured groups of images;
S4: transform and stitch the second eye images according to the transformed and stitched first eye image and the target disparity map, obtaining the transformed and stitched second eye image;
S5: combine the transformed and stitched first eye image from step S2 with the transformed and stitched second eye image from step S4 to obtain the final stereo image.
In a further scheme, step S2 specifically comprises:
S21: using the feature point sets screened in step S1, iteratively compute the homography matrix of the first eye images, then transform the first eye images that need transforming according to the homography matrix, taking the first of the first eye images as the reference, to obtain the transformed first eye images;
S22: apply a mesh transformation to the transformed first eye images, then stitch them to obtain the transformed and stitched first eye image.
In a further scheme, step S4 specifically comprises: from the target disparity map, compute depth information using the disparity-depth relation and the camera focal length, and cluster the feature point set of the second eye images by depth; then, via the target disparity map, map the grid vertices of the transformed and stitched first eye image into the second eye images to obtain the corresponding vertex coordinates; finally, apply mesh transformations to all the second eye images and stitch them to obtain the transformed and stitched second eye image.
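As a rough illustration of the vertex mapping in this step, the sketch below shifts each grid vertex of the stitched first eye image horizontally by the disparity value sampled at that vertex. The mapping convention (x_r = x_l − d, y unchanged) is an assumption about how the patent intends the disparity map to be applied, not a statement of its exact method.

```python
import numpy as np

def map_vertices_to_second_eye(vertices, disparity_map):
    """Map (x, y) grid vertices of the first-eye image into the second-eye
    image by subtracting the horizontal disparity; y is left unchanged,
    assuming vertical parallax has already been suppressed."""
    mapped = np.asarray(vertices, dtype=float).copy()
    for i, (x, y) in enumerate(vertices):
        mapped[i, 0] = x - disparity_map[int(y), int(x)]
    return mapped
```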
Compared with the prior art, the beneficial effects of the present invention are: the binocular stereo image stitching method of the present invention not only achieves seamless stitching and reduced ghosting, but also places no restriction on camera position and angle, enabling the stitching of arbitrarily captured stereo images.
In a further scheme, the homography matrix is optimized by adding a parallax energy term, so that parallax is handled and the image is more natural and continuous; and by introducing a vertical-parallax energy term, the captured images need only share about a 30% overlap region to be stitched, without fixing camera position and angle, making the method more convenient to use.
Brief description of the drawings
Fig. 1 is a flow diagram of the binocular stereo image stitching method of the preferred embodiment of the present invention.
Detailed description of embodiments
The invention is further described below with reference to the accompanying drawing and preferred embodiments.
As shown in Fig. 1, the preferred embodiment of the present invention proposes a binocular stereo image stitching method comprising the following steps:
S1: capture multiple groups of images with a binocular camera, then extract and screen features to obtain feature point sets.
Specifically, the captured images are denoted I_i, where i = 1, 2, ..., n and n >= 2. Each stereo pair I_i comprises a first eye image I_{i,l} shot by the camera facing the first direction and a second eye image I_{i,r} shot by the camera facing the second direction. A feature extraction algorithm (such as SIFT or SURF) extracts feature points between each first eye image to be transformed and the first of the first eye images, (I_{j,l}, I_{1,l}), and between each first eye image to be transformed and its corresponding second eye image, (I_{j,l}, I_{j,r}); the extracted feature points are then screened with the RANSAC algorithm, yielding the matched feature point sets (F_{j,l}, F_{1,l}) and (F_{j,l}, F_{j,r}), where j = 2, 3, ..., n and n >= 2.
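The RANSAC screening of step S1 can be sketched as follows: a minimal inlier filter against a homography model fitted by the direct linear transform. SIFT/SURF extraction itself is omitted, and the function names, thresholds, and iteration counts are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: fit H so that dst ~ H @ src (homogeneous)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_filter(src, dst, iters=200, thresh=2.0, seed=0):
    """Keep only the matches consistent with the best homography found."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    best_mask = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, 4, replace=False)
        try:
            H = estimate_homography(src[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue
        p = np.column_stack([src, np.ones(n)]) @ H.T
        proj = p[:, :2] / p[:, 2:3]
        mask = np.linalg.norm(proj - dst, axis=1) < thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```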
S2: transform and stitch the first eye images according to the feature point sets screened in step S1, obtaining the transformed and stitched first eye image. This comprises the following steps:
S21: using the feature point sets (F_{j,l}, F_{1,l}) and (F_{j,l}, F_{j,r}) computed in step S1, iteratively compute the homography matrix H_L of the first eye images so that the first energy term E_f reaches a minimum during the iteration; then, according to the computed homography matrix H_L, transform each first eye image I_{j,l} that needs transforming, taking the first of the first eye images I_{1,l} as the reference, to obtain the transformed first eye image \hat I_{j,l}. The expression for the first energy term E_f is:
E_f = \frac{1}{n_1}\sum_{m=1}^{n_1}\sum_{j=2}^{n}\left\| \frac{1}{w_m} H F_{j,l}(m) - F_{1,l}(m) \right\|^2 + \frac{1}{n_2}\sum_{k=1}^{n_2}\sum_{j=2}^{n}\left\| \left[\frac{1}{w_k} H F_{j,l}(k)\right]_y - \left[\frac{1}{w_k} H F_{j,r}(k)\right]_y \right\|^2
where n_1 is the number of feature points in the set (F_{j,l}, F_{1,l}), n_2 is the number of feature points in the set (F_{j,l}, F_{j,r}), and H is the homography matrix of the first eye image during the iteration; w_m and w_k are weights, related to the Gaussian distances from the current feature point to all feature points in the image. Here w_m denotes the weight of the m-th feature point in the j-th first eye image, and w_k the weight of the k-th feature point in the j-th first eye image; [\hat F_{j,l}(k)]_y denotes the y coordinate of the k-th feature point after the transformation of the j-th first eye image, and [\hat F_{j,r}(k)]_y the y coordinate of the k-th feature point after the transformation of the j-th second eye image.
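Read literally, the energy E_f can be evaluated as below for one image pair j. This sketch folds the 1/w factors into the homogeneous normalization of the projection (one plausible reading) and omits the Gaussian per-point weighting; both simplifications are assumptions for illustration.

```python
import numpy as np

def project(H, pts):
    """Apply homography H to Nx2 points with homogeneous normalization."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def parallax_energy(H, F_jl, F_1l, F_jr):
    """E_f for one pair j: mean squared alignment error against the
    reference features, plus mean squared vertical disparity between
    the warped left and right features."""
    align = project(H, F_jl) - np.asarray(F_1l, float)
    e_align = np.mean(np.sum(align ** 2, axis=1))
    dy = project(H, F_jl)[:, 1] - project(H, F_jr)[:, 1]
    return e_align + np.mean(dy ** 2)
```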
S22: apply a mesh transformation to the transformed first eye image \hat I_{j,l} so that the first total energy term E_L is minimized; find the stitching seam of the overlap region with a seam-finding method, stitch the mesh-transformed first eye images, and obtain the final first eye image I_L.
The expression for the first total energy term E_L is:
E_L = αE_gl + βE_sl + E_yl + E_dl
where E_gl is the global alignment term of the first eye image, E_sl the shape-preservation term of the first eye image, E_yl the vertical-parallax constraint term of the first eye image, and E_dl the horizontal-parallax constraint term of the first eye image; α and β are weights, each taking a value in [0, 1].
The global alignment term E_gl of the first eye image requires the transformed feature points to coincide as closely as possible with the matching feature points in the reference image (the first of the first eye images):
E_{gl} = \sum_{m=1}^{n_1}\sum_{j=2}^{n}\left\| \hat F_{j,l}(m) - F_{1,l}(m) \right\|^2
where \hat F_{j,l}(m) denotes the m-th feature point after the transformation of the j-th first eye image.
The shape-preservation term E_sl of the first eye image requires each transformed grid triangle to stay close to a similarity transform of the original triangle:
E_{sl} = \sum_{i} \omega_i \left\| \hat v_i - \left( \hat v_j + u(\hat v_k - \hat v_j) + v R(\hat v_k - \hat v_j) \right) \right\|^2, \quad R = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
where \hat v_i, \hat v_j, \hat v_k are the three vertices of a grid triangle after transformation, v_i, v_j, v_k the corresponding vertices before transformation, (u, v) the coordinates of v_i in the local frame spanned by v_k - v_j and R(v_k - v_j), and ω_i the saliency of the grid cell.
The vertical-parallax constraint term E_yl of the first eye image requires the ordinates of corresponding feature points in the first and second eye images to stay as close as possible:
E_{yl} = \sum_{j=2}^{n}\left\| \hat F_{j,l,y} - \hat F_{j,r,y} \right\|^2
where \hat F_{j,l,y} denotes the y coordinates of the j-th first eye image after transformation and \hat F_{j,r,y} those of the j-th second eye image after transformation.
The horizontal-parallax constraint term E_dl of the first eye image requires the difference of the abscissas of corresponding feature points in the first and second eye images after transformation to stay as close as possible to the difference before transformation:
E_{dl} = \sum_{j=2}^{n}\left\| (\hat F_{j,l,x} - \hat F_{j,r,x}) - (F_{j,l,x} - F_{j,r,x}) \right\|^2
where \hat F_{j,l,x} denotes the x coordinates of the j-th first eye image after transformation, \hat F_{j,r,x} those of the j-th second eye image after transformation, F_{j,l,x} the x coordinates of the j-th first eye image before transformation, and F_{j,r,x} those of the j-th second eye image before transformation.
S3: obtain the target disparity map: downsample each stereo pair (I_{i,l}, I_{i,r}), estimate dense optical flow vectors with an optical flow method, scale the flow vectors back up to the original resolution to obtain the disparity maps D_i, and stitch them to obtain the target disparity map D.
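The patent estimates disparity with dense optical flow on downsampled pairs. As a self-contained stand-in, the sketch below computes horizontal disparity by brute-force block matching; this is a deliberate simplification (a real implementation would use an optical-flow method and rescale the result by the downsampling factor), and all parameter names are illustrative.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=8, block=3):
    """Per-pixel horizontal disparity: for each left pixel, find the shift d
    (left x matches right x - d) minimizing the SAD over a small block."""
    h, w = left.shape
    pad = block // 2
    disp = np.zeros((h, w))
    L = np.pad(left, pad, mode='edge')
    R = np.pad(right, pad, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = L[y:y + block, x:x + block]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x) + 1):
                cand = R[y:y + block, x - d:x - d + block]
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```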
S4: transform and stitch the second eye images according to the transformed and stitched first eye image and the target disparity map, obtaining the transformed and stitched second eye image. The specific steps are as follows:
From the target disparity map D obtained in step S3, compute depth information using the disparity-depth relation and the camera focal length, and cluster the feature point set of the second eye images by depth (into two or more classes); in the overlap region, obtain a transformation matrix per class, and in the non-overlap region use the global homography matrix. Via the target disparity map, map the grid vertices of the stitched first eye image I_L into the second eye images to obtain the corresponding vertex coordinates; then apply mesh transformations to all the second eye images so that the second total energy term E_R is minimized, find the stitching seam of the overlap region with a seam-finding method, and stitch to obtain the final second eye image I_R.
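The depth computation and clustering in this step can be sketched as follows. The triangulation Z = f·b/d is the standard disparity-depth relation (b is the camera baseline, an input the patent implies via the binocular rig); the tiny 1-D k-means is an illustrative stand-in for whatever clustering the patent intends, which only requires two or more classes.

```python
import numpy as np

def depth_from_disparity(disp, focal, baseline):
    """Triangulation: Z = f * b / d (valid where disparity is nonzero)."""
    return focal * baseline / np.maximum(disp, 1e-6)

def kmeans_1d(values, k=2, iters=50):
    """Tiny k-means on scalar depth values, used to split the second-eye
    feature points into k >= 2 depth layers."""
    values = np.asarray(values, float)
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers
```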
The expression for the second total energy term E_R is:
E_R = E_gr + E_sr + E_yr + E_dr
where E_gr is the global alignment term of the second eye image, E_sr the shape-preservation term of the second eye image, E_yr the vertical-parallax constraint term of the second eye image, and E_dr the horizontal-parallax constraint term of the second eye image.
The global alignment term E_gr of the second eye image requires the coordinates of the grid vertices of the second eye image before and after transformation to be as consistent as possible:
E_{gr} = \sum_{i}\left\| \hat v_i - v_i \right\|^2
where \hat v_i denotes the coordinate of a grid vertex after transformation and v_i the coordinate before transformation.
The shape-preservation term E_sr of the second eye image is defined in the same way as E_sl, with \hat v_i, \hat v_j, \hat v_k the three vertices of a grid triangle after transformation, v_i, v_j, v_k the vertices before transformation, ω_i the saliency of the grid cell, and R the rotation matrix [[0, 1], [-1, 0]].
The vertical-parallax constraint term E_yr of the second eye image requires the ordinates of corresponding feature points in the first and second eye images to stay as close as possible, analogously to E_yl.
The expression for the horizontal-parallax constraint term E_dr of the second eye image is:
E_{dr} = \sum_{j}\left\| (\hat v_{j,l,x} - \hat v_{j,r,x}) - (v_{j,l,x} - v_{j,r,x}) \right\|^2
where \hat v_{j,l,x} denotes the x coordinate of a vertex of the j-th first eye image after transformation, \hat v_{j,r,x} that of the j-th second eye image after transformation, v_{j,l,x} the x coordinate of the vertex of the j-th first eye image before transformation, and v_{j,r,x} that of the j-th second eye image before transformation.
S5: combine the transformed and stitched first eye image I_L from step S2 with the transformed and stitched second eye image I_R from step S4 to form the final stereo image.
The first eye image and the second eye image above are the images shot by the two cameras of the binocular camera, i.e. the left eye image shot by the left camera and the right eye image shot by the right camera; that is, either the first eye image is the left eye image and the second eye image is the right eye image, or the first eye image is the right eye image and the second eye image is the left eye image.
The binocular stereo image stitching method of the preferred embodiment of the present invention is further illustrated below with a specific example.
A1: capture two groups of images with a binocular camera, then extract and screen features. The captured images are denoted I_1 and I_2; each stereo pair comprises a left eye image I_{1,l} or I_{2,l} shot by the left camera and a right eye image I_{1,r} or I_{2,r} shot by the right camera. A feature extraction algorithm (such as SIFT or SURF) extracts feature points between the left eye image to be transformed and the first left eye image, (I_{2,l}, I_{1,l}), and between the left eye image to be transformed and its corresponding right eye image, (I_{2,l}, I_{2,r}); the extracted feature points are then screened with the RANSAC algorithm, yielding the matched feature point sets (F_{2,l}, F_{1,l}) and (F_{2,l}, F_{2,r}).
A2: transform the left eye image and stitch, taking parallax into account. Using the feature point sets (F_{2,l}, F_{1,l}) and (F_{2,l}, F_{2,r}) computed in step A1, iteratively compute the homography matrix H_L of the left eye image so that the parallax energy term E_f reaches a minimum during the iteration; then, according to the computed homography matrix H_L, transform the remaining left eye image, taking the first left eye image as the reference, to obtain the transformed left eye image \hat I_{2,l}. The expression for the parallax energy term E_f is:
E_f = \frac{1}{n_1}\sum_{m=1}^{n_1}\left\| \frac{1}{w_m} H F_{2,l}(m) - F_{1,l}(m) \right\|^2 + \frac{1}{n_2}\sum_{k=1}^{n_2}\left\| \left[\frac{1}{w_k} H F_{2,l}(k)\right]_y - \left[\frac{1}{w_k} H F_{2,r}(k)\right]_y \right\|^2
where n_1 is the number of feature points in the set (F_{2,l}, F_{1,l}), n_2 the number of feature points in the set (F_{2,l}, F_{2,r}), and H the homography matrix of the left eye image during the iteration; w_m and w_k are weights, related to the Gaussian distances from the current feature point to all feature points in the image. Here w_m denotes the weight of the m-th feature point in the second left eye image, and w_k the weight of the k-th feature point in the second left eye image; [\hat F_{2,l}(k)]_y denotes the y coordinate of the k-th feature point after the transformation of the second left eye image, and [\hat F_{2,r}(k)]_y the y coordinate of the k-th feature point after the transformation of the second right eye image.
Apply a mesh transformation to the transformed left eye image \hat I_{2,l} so that the first total energy term E_L is minimized; then find the stitching seam of the overlap region with a seam-finding method and stitch, obtaining the final left eye image I_L. The expression for the first total energy term E_L is:
E_L = αE_gl + βE_sl + E_yl + E_dl
where E_gl is the global alignment term of the left eye image, E_sl the shape-preservation term of the left eye image, E_yl the vertical-parallax constraint term of the left eye image, and E_dl the horizontal-parallax constraint term of the left eye image; α and β are weights, each taking a value in [0, 1]; in some examples, α = 0.7 and β = 0.4.
The global alignment term E_gl of the left eye image is:
E_{gl} = \sum_{m=1}^{n_1}\left\| \hat F_{2,l}(m) - F_{1,l}(m) \right\|^2
where \hat F_{2,l}(m) denotes the m-th transformed feature point, which should coincide as closely as possible with the matching feature point in the reference image (the first left eye image).
The shape-preservation term E_sl of the left eye image is defined as above, with \hat v_i, \hat v_j, \hat v_k the three vertices of a grid triangle after transformation, v_i, v_j, v_k the vertices before transformation, ω_i the saliency of the grid cell, and R the rotation matrix [[0, 1], [-1, 0]].
The vertical-parallax constraint term E_yl of the left eye image is:
E_{yl} = \left\| \hat F_{2,l,y} - \hat F_{2,r,y} \right\|^2
where \hat F_{2,l,y} denotes the y coordinates of the second left eye image after transformation and \hat F_{2,r,y} those of the second right eye image after transformation.
The horizontal-parallax constraint term E_dl of the left eye image is:
E_{dl} = \left\| (\hat F_{2,l,x} - \hat F_{2,r,x}) - (F_{2,l,x} - F_{2,r,x}) \right\|^2
where \hat F_{2,l,x} denotes the x coordinates of the second left eye image after transformation, \hat F_{2,r,x} those of the second right eye image after transformation, F_{2,l,x} the x coordinates of the second left eye image before transformation, and F_{2,r,x} those of the second right eye image before transformation.
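The two parallax constraint terms above reduce to a few lines of code; the sketch below evaluates them for given arrays of feature coordinates (function and argument names are illustrative):

```python
import numpy as np

def vertical_parallax_term(Fl_y_new, Fr_y_new):
    """E_y: penalize vertical disparity between corresponding warped
    left and right feature points."""
    return float(np.sum((np.asarray(Fl_y_new) - np.asarray(Fr_y_new)) ** 2))

def horizontal_parallax_term(Fl_x_new, Fr_x_new, Fl_x, Fr_x):
    """E_d: keep the horizontal disparity after warping close to the
    horizontal disparity before warping."""
    d_new = np.asarray(Fl_x_new) - np.asarray(Fr_x_new)
    d_old = np.asarray(Fl_x) - np.asarray(Fr_x)
    return float(np.sum((d_new - d_old) ** 2))
```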
A3: obtain the disparity map. Downsample each stereo pair (I_{1,l}, I_{1,r}) and (I_{2,l}, I_{2,r}), estimate dense optical flow vectors with an optical flow method, scale the flow vectors back up to the original resolution to obtain the disparity maps D_i, and stitch them to obtain the target disparity map D.
A4: transform the right eye image and stitch. From the target disparity map D obtained in A3, compute depth information using the disparity-depth relation and the camera focal length, and cluster the feature point set of the right eye image by depth (into two or more classes); in the overlap region obtain a transformation matrix per class, and in the non-overlap region use the global homography matrix. Via the target disparity map, map the grid vertices of the stitched left eye image I_L into the right eye image to obtain the corresponding vertex coordinates; then apply mesh transformations to all the right eye images so that the second total energy term E_R is minimized, find the stitching seam of the overlap region with a seam-finding method, and stitch to obtain the final right eye image I_R. The expression for the second total energy term E_R is:
E_R = E_gr + E_sr + E_yr + E_dr
where E_gr is the global alignment term of the right eye image, E_sr the shape-preservation term of the right eye image, E_yr the vertical-parallax constraint term of the right eye image, and E_dr the horizontal-parallax constraint term of the right eye image.
The global alignment term E_gr of the right eye image requires the coordinates of the grid vertices of the right eye image before and after transformation to be as consistent as possible:
E_{gr} = \sum_{i}\left\| \hat v_i - v_i \right\|^2
where \hat v_i denotes the coordinate of a grid vertex after transformation and v_i the coordinate before transformation.
The horizontal-parallax term E_dr of the right eye image is:
E_{dr} = \left\| (\hat v_{2,l,x} - \hat v_{2,r,x}) - (v_{2,l,x} - v_{2,r,x}) \right\|^2
where \hat v_{2,l,x} denotes the x coordinates of the vertices of the second left eye image after transformation, \hat v_{2,r,x} those of the second right eye image after transformation, v_{2,l,x} the x coordinates of the vertices of the second left eye image before transformation, and v_{2,r,x} those of the second right eye image before transformation.
The remaining two terms E_sr and E_yr are defined consistently with E_sl and E_yl of the left eye image.
A5: combine the left eye image stitched in step A2 with the right eye image stitched in step A4 into a stereo image.
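Step A5 only assembles the two stitched panoramas into one stereo output. The patent does not fix an output format, so the sketch below shows two common conventions (side-by-side, and a red-cyan anaglyph) as assumptions:

```python
import numpy as np

def compose_stereo(left_rgb, right_rgb, mode="side_by_side"):
    """Assemble the stitched left and right panoramas into one stereo frame.
    'anaglyph' keeps the red channel from the left image and green/blue
    from the right -- one common convention, not prescribed by the patent."""
    if mode == "side_by_side":
        return np.concatenate([left_rgb, right_rgb], axis=1)
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out
```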
Stitching with the above method yields the stereo image: the parallax energy term constrains and optimizes the feature points when the homography matrix is iteratively computed, and during stitching the energy terms defined for vertical and horizontal parallax fully account for both and optimize the mesh. The stitched images are therefore seamless with reduced ghosting, without being constrained by camera position and angle.
The above further describes the present invention with reference to specific preferred embodiments, but the specific implementation of the present invention is not to be considered limited to these descriptions. For those skilled in the art, several equivalent substitutions or obvious modifications with identical performance or use may be made without departing from the concept of the present invention, and all of them should be regarded as falling within the protection scope of the present invention.
Claims (10)
1. A binocular stereo image stitching method, characterized by comprising the following steps:
S1: capture multiple groups of images with a binocular camera, then extract and screen features to obtain feature point sets, wherein each group of images comprises a first eye image captured by the camera facing the first direction and a second eye image captured by the camera facing the second direction;
S2: transform and stitch the first eye images according to the feature point sets screened in step S1, obtaining the transformed and stitched first eye image;
S3: compute a target disparity map from the captured groups of images;
S4: transform and stitch the second eye images according to the transformed and stitched first eye image and the target disparity map, obtaining the transformed and stitched second eye image;
S5: combine the transformed and stitched first eye image from step S2 with the transformed and stitched second eye image from step S4 to obtain the final stereo image.
2. The binocular stereo image stitching method according to claim 1, wherein step S2 specifically comprises:
S21: using the feature point sets screened in step S1, iteratively compute the homography matrix of the first eye images, then transform the first eye images that need transforming according to the homography matrix, taking the first of the first eye images as the reference, to obtain the transformed first eye images;
S22: apply a mesh transformation to the transformed first eye images, then stitch them to obtain the transformed and stitched first eye image.
3. The binocular stereo image stitching method according to claim 1, wherein step S4 specifically comprises: from the target disparity map, compute depth information using the disparity-depth relation and the camera focal length, and cluster the feature point set of the second eye images by depth; then, via the target disparity map, map the grid vertices of the transformed and stitched first eye image into the second eye images to obtain the corresponding vertex coordinates; finally, apply mesh transformations to all the second eye images and stitch them to obtain the transformed and stitched second eye image.
4. The binocular stereo image stitching method according to claim 1, wherein step S1 specifically comprises: capture multiple groups of images with a binocular camera, denoted I_i, where i = 1, 2, ..., n and n >= 2, each group comprising a first eye image I_{i,l} and a second eye image I_{i,r}; use a feature extraction algorithm to extract feature points between each first eye image to be transformed and the first of the first eye images, (I_{j,l}, I_{1,l}), and between each first eye image to be transformed and its corresponding second eye image, (I_{j,l}, I_{j,r}); screen the extracted feature points with the RANSAC algorithm, yielding the matched feature point sets (F_{j,l}, F_{1,l}) and (F_{j,l}, F_{j,r}), where j = 2, 3, ..., n and n >= 2.
5. The binocular stereo image stitching method according to claim 4, wherein step S2 specifically comprises:
S21: using the feature point sets (F_{j,l}, F_{1,l}) and (F_{j,l}, F_{j,r}) screened in step S1, iteratively compute the homography matrix H_L of the first eye image I_{i,l}, then, according to the homography matrix H_L, transform each first eye image I_{j,l} that needs transforming, taking the first of the first eye images I_{1,l} as the reference, to obtain the transformed first eye image \hat I_{j,l};
S22: apply a mesh transformation to the transformed first eye image \hat I_{j,l}, then stitch to obtain the transformed and stitched first eye image I_L.
6. The binocular stereo image stitching method according to claim 5, wherein, during the iterative computation of the homography matrix H_L of the first eye image I_{i,l} in step S21, the first energy term E_f is made to reach a minimum; wherein the expression for the first energy term E_f is:
E_f = \frac{1}{n_1}\sum_{m=1}^{n_1}\sum_{j=2}^{n}\left\| \frac{1}{w_m} H F_{j,l}(m) - F_{1,l}(m) \right\|^2 + \frac{1}{n_2}\sum_{k=1}^{n_2}\sum_{j=2}^{n}\left\| \left[\frac{1}{w_k} H F_{j,l}(k)\right]_y - \left[\frac{1}{w_k} H F_{j,r}(k)\right]_y \right\|^2
In the formula, n_1 is the number of feature points in the feature point set (F_{j,l}, F_{1,l}) and n_2 is the number of feature points in the feature point set (F_{j,l}, F_{j,r}); H is the homography matrix of the first view during the iteration; w_m and w_k are weight values; [(1/w_k)HF_{j,l}(k)]_y denotes the y-coordinate of the k-th feature point of the j-th first view after transformation, and [(1/w_k)HF_{j,r}(k)]_y denotes the y-coordinate of the k-th feature point of the j-th second view after transformation.
Preferably, w_m denotes the weight value of the m-th feature point in the j-th first view, and w_k denotes the weight value of the k-th feature point in the j-th first view.
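As an illustration only (the patent provides no code), the first energy term can be sketched in numpy; the function names, the homogeneous-coordinate handling, and the way matched feature sets are passed in are assumptions of this sketch:

```python
import numpy as np

def apply_h(H, pts):
    """Map Nx2 points through a 3x3 homography; dividing by the third
    homogeneous coordinate plays the role of the 1/w factors in E_f."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def energy_f(H, F_align_src, F_align_dst, F_stereo_l, F_stereo_r):
    """Sketch of E_f: an alignment part (transformed first-view features
    vs. their matches) plus a vertical-parallax part between the
    transformed left/right stereo feature matches."""
    n1, n2 = len(F_align_src), len(F_stereo_l)
    e_align = np.sum((apply_h(H, F_align_src) - F_align_dst) ** 2) / n1
    yl = apply_h(H, F_stereo_l)[:, 1]
    yr = apply_h(H, F_stereo_r)[:, 1]
    e_vert = np.sum((yl - yr) ** 2) / n2
    return e_align + e_vert
```

With the identity homography and perfectly matched points both parts vanish, which is the fixed point the iteration in step S21 drives toward.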
7. The binocular stereo image stitching method according to claim 5, wherein the mesh transformation applied to the transformed first views in step S22 minimizes a first total energy term E_L, the first total energy term E_L being expressed as follows:
E_L = αE_gl + βE_sl + E_yl + E_dl
In the formula, E_gl denotes the global alignment term of the first view, E_sl denotes the shape-preservation term of the first view, E_yl denotes the vertical-disparity constraint term of the first view, and E_dl denotes the horizontal-disparity constraint term of the first view; α and β are weight terms, each taking a value between 0 and 1.
8. The binocular stereo image stitching method according to claim 7, wherein:
The global alignment term E_gl of the first view is expressed as follows:
$$E_{gl}=\sum_{m=1}^{n_1}\sum_{j=2}^{n}\left\|\hat{F}_{j,l}(m)-F_{1,l}(m)\right\|^2$$
In the formula, F̂_{j,l}(m) denotes the m-th feature point of the j-th first view after transformation.
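A minimal numpy sketch of the global alignment term; the function name and the list-based inputs are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def energy_global_align(F_hat_per_view, F_ref_per_view):
    """Sketch of E_gl: sum over views j >= 2 of the squared distances
    between warped feature points and their matches in the first view."""
    return sum(
        float(np.sum((F_hat - F_ref) ** 2))
        for F_hat, F_ref in zip(F_hat_per_view, F_ref_per_view)
    )
```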
The shape-preservation term E_sl of the first view is embodied as follows:
$$E_{sl}=\sum_{\hat{v}_i}\omega_i\left\|\hat{v}_i-\left(\hat{v}_j+u\left(\hat{v}_k-\hat{v}_j\right)+vR\left(\hat{v}_k-\hat{v}_j\right)\right)\right\|^2$$
In the formula, v̂_i, v̂_j, v̂_k are the three vertices of a grid cell after transformation, ω_i denotes the saliency of the grid cell, R is the 90° rotation matrix, and u and v are the local coordinates of v_i in the frame spanned by (v_k − v_j) and R(v_k − v_j), where v_i, v_j, v_k are the three vertices of the grid cell before transformation.
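The shape-preservation residual for one triangle can be sketched as follows; the concrete 90° rotation matrix and the recovery of (u, v) from the undeformed grid are standard choices for this kind of term, assumed here rather than quoted from the patent:

```python
import numpy as np

R90 = np.array([[0.0, 1.0], [-1.0, 0.0]])  # one choice of 90-degree rotation

def local_coords(vi, vj, vk):
    """(u, v) of vertex vi in the frame spanned by (vk - vj) and its
    90-degree rotation, computed on the grid before transformation."""
    e, d = vk - vj, vi - vj
    denom = e @ e
    return (d @ e) / denom, (d @ (R90 @ e)) / denom

def shape_term(tri_before, tri_after, weight=1.0):
    """Sketch of one summand of E_sl: how far the deformed triangle is
    from a similarity transform of the original one."""
    u, v = local_coords(*tri_before)
    vi, vj, vk = tri_after
    pred = vj + u * (vk - vj) + v * (R90 @ (vk - vj))
    return weight * float(np.sum((vi - pred) ** 2))
```

Any rotation, uniform scaling, or translation of the triangle leaves this residual at zero; only non-similarity distortions of the grid cell are penalized.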
The vertical-disparity constraint term E_yl of the first view is expressed as follows:
$$E_{yl}=\sum_{j=2}^{n}\left\|\hat{F}_{j,l,y}-\hat{F}_{j,r,y}\right\|^2$$
In the formula, F̂_{j,l,y} denotes the y-coordinate of the j-th first view after transformation, and F̂_{j,r,y} denotes the y-coordinate of the j-th second view after transformation.
The horizontal-disparity constraint term E_dl of the first view is expressed as follows:
$$E_{dl}=\sum_{j=2}^{n}\left\|\hat{F}_{j,l,x}-\hat{F}_{j,r,x}-\left(F_{j,l,x}-F_{j,r,x}\right)\right\|^2$$
In the formula, F̂_{j,l,x} denotes the x-coordinate of the j-th first view after transformation, F̂_{j,r,x} denotes the x-coordinate of the j-th second view after transformation, F_{j,l,x} denotes the x-coordinate of the j-th first view before transformation, and F_{j,r,x} denotes the x-coordinate of the j-th second view before transformation.
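Both disparity terms reduce to sums of squared coordinate differences. A sketch, assuming Nx2 feature arrays with columns (x, y):

```python
import numpy as np

def disparity_terms(F_hat_l, F_hat_r, F_l, F_r):
    """Sketch of E_yl and E_dl: E_yl penalizes any remaining vertical
    parallax after the warp, while E_dl penalizes changes of horizontal
    disparity relative to the input views."""
    e_y = float(np.sum((F_hat_l[:, 1] - F_hat_r[:, 1]) ** 2))
    d_after = F_hat_l[:, 0] - F_hat_r[:, 0]
    d_before = F_l[:, 0] - F_r[:, 0]
    e_d = float(np.sum((d_after - d_before) ** 2))
    return e_y, e_d
```

A warp that translates both views identically changes neither vertical parallax nor horizontal disparity, so both terms stay at zero.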
9. The binocular stereo image stitching method according to claim 5, wherein step S4 specifically comprises: according to the target disparity map, calculating depth information using the relation between disparity and depth together with the camera focal length, and clustering the feature point set of the second view using the depth information; then, through the target disparity map, mapping the mesh vertices of the transformed and stitched first view into the second views to obtain the corresponding mesh-vertex coordinates in the second views, performing mesh transformation on all second views, and stitching them to obtain the transformed and stitched second view. Preferably, the mesh transformation applied to all second views minimizes a second total energy term E_R, the second total energy term E_R being expressed as follows:
E_R = E_gr + E_sr + E_yr + E_dr
In the formula, E_gr denotes the global alignment term of the second view, E_sr denotes the shape-preservation term of the second view, E_yr denotes the vertical-disparity constraint term of the second view, and E_dr denotes the horizontal-disparity constraint term of the second view.
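The depth step in claim 9 relies on the rectified-stereo relation Z = f·B/d (focal length f, baseline B, disparity d). The claim does not name a clustering method, so a tiny 1-D k-means stands in for it in this sketch:

```python
import numpy as np

def depth_from_disparity(disparity, focal, baseline):
    """Rectified-stereo depth: Z = f * B / d (f in pixels, B in scene units)."""
    return focal * baseline / disparity

def cluster_by_depth(depths, n_clusters=2, iters=20):
    """Tiny 1-D k-means over depth values, standing in for the clustering
    of the second-view feature set described in step S4."""
    centers = np.linspace(depths.min(), depths.max(), n_clusters)
    for _ in range(iters):
        labels = np.argmin(np.abs(depths[:, None] - centers[None, :]), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = depths[labels == c].mean()
    return labels, centers
```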
10. The binocular stereo image stitching method according to claim 9, wherein:
The global alignment term E_gr of the second view is expressed as follows:
$$E_{gr}=\sum_i\left\|\hat{v}_i-v_i\right\|^2$$
In the formula, v̂_i denotes the coordinates of a mesh vertex after transformation, and v_i denotes the coordinates of the mesh vertex before transformation.
The shape-preservation term E_sr of the second view is expressed as follows:
$$E_{sr}=\sum_{\hat{v}_i}\omega_i\left\|\hat{v}_i-\left(\hat{v}_j+u\left(\hat{v}_k-\hat{v}_j\right)+vR\left(\hat{v}_k-\hat{v}_j\right)\right)\right\|^2$$
In the formula, v̂_i, v̂_j, v̂_k are the three vertices of a grid cell after transformation, ω_i denotes the saliency of the grid cell, R is the 90° rotation matrix, and u and v are the local coordinates of v_i in the frame spanned by (v_k − v_j) and R(v_k − v_j), where v_i, v_j, v_k are the three vertices of the grid cell before transformation.
The vertical-disparity constraint term E_yr of the second view is expressed as follows:
$$E_{yr}=\sum_{j=2}^{n}\left\|\hat{F}_{j,l,y}-\hat{F}_{j,r,y}\right\|^2$$
In the formula, F̂_{j,l,y} denotes the y-coordinate of the j-th first view after transformation, and F̂_{j,r,y} denotes the y-coordinate of the j-th second view after transformation.
The horizontal-disparity constraint term E_dr is expressed as follows:
$$E_{dr}=\sum_{j=2}^{n}\left\|\hat{v}_{j,l,x}-\hat{v}_{j,r,x}-\left(v_{j,l,x}-v_{j,r,x}\right)\right\|^2$$
In the formula, v̂_{j,l,x} denotes the x-coordinate of a vertex of the j-th first view after transformation, v̂_{j,r,x} denotes the x-coordinate of a vertex of the j-th second view after transformation, v_{j,l,x} denotes the x-coordinate of a vertex of the j-th first view before transformation, and v_{j,r,x} denotes the x-coordinate of a vertex of the j-th second view before transformation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710948182.9A CN107767339B (en) | 2017-10-12 | 2017-10-12 | Binocular stereo image splicing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107767339A true CN107767339A (en) | 2018-03-06 |
CN107767339B CN107767339B (en) | 2021-02-02 |
Family
ID=61267165
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130208975A1 (en) * | 2012-02-13 | 2013-08-15 | Himax Technologies Limited | Stereo Matching Device and Method for Determining Concave Block and Convex Block |
CN103345736A (en) * | 2013-05-28 | 2013-10-09 | 天津大学 | Virtual viewpoint rendering method |
CN105389787A (en) * | 2015-09-30 | 2016-03-09 | 华为技术有限公司 | Panorama image stitching method and device |
CN105678687A (en) * | 2015-12-29 | 2016-06-15 | 天津大学 | Stereo image stitching method based on content of images |
CN106127690A (en) * | 2016-07-06 | 2016-11-16 | 李长春 | A kind of quick joining method of unmanned aerial vehicle remote sensing image |
CN106683045A (en) * | 2016-09-28 | 2017-05-17 | 深圳市优象计算技术有限公司 | Binocular camera-based panoramic image splicing method |
CN106910253A (en) * | 2017-02-22 | 2017-06-30 | 天津大学 | Stereo-picture cloning process based on different cameral spacing |
CN107240082A (en) * | 2017-06-23 | 2017-10-10 | 微鲸科技有限公司 | A kind of splicing line optimization method and equipment |
Non-Patent Citations (3)
Title |
---|
WANG, XINGZHENG; TIAN, YUSHI; WANG, HAOQIAN; ZHANG, YONGBI: "Stereo Matching by Filtering-Based Disparity Propagation", PLOS ONE * |
LI, YUFENG; LI, GUANGZE; GU, SHAOHU; LONG, KEHUI: "Image stitching algorithm based on region partitioning and scale-invariant feature transform", Optics and Precision Engineering * |
WANG, YING: "Research on multi-homography registration and misalignment elimination algorithms in image stitching", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108470324A (en) * | 2018-03-21 | 2018-08-31 | 深圳市未来媒体技术研究院 | A kind of binocular stereo image joining method of robust |
CN108470324B (en) * | 2018-03-21 | 2022-02-25 | 深圳市未来媒体技术研究院 | Robust binocular stereo image splicing method |
CN109727194A (en) * | 2018-11-20 | 2019-05-07 | 广东智媒云图科技股份有限公司 | A kind of method, electronic equipment and storage medium obtaining pet noseprint |
CN109727194B (en) * | 2018-11-20 | 2023-08-04 | 广东智媒云图科技股份有限公司 | Method for obtaining nose patterns of pets, electronic equipment and storage medium |
CN110111255B (en) * | 2019-04-24 | 2023-02-28 | 天津大学 | Stereo image splicing method |
CN110111255A (en) * | 2019-04-24 | 2019-08-09 | 天津大学 | A kind of stereo-picture joining method |
CN110264406A (en) * | 2019-05-07 | 2019-09-20 | 威盛电子股份有限公司 | The method of image processing apparatus and image procossing |
CN110264406B (en) * | 2019-05-07 | 2023-04-07 | 威盛电子(深圳)有限公司 | Image processing apparatus and image processing method |
CN110458870A (en) * | 2019-07-05 | 2019-11-15 | 北京迈格威科技有限公司 | A kind of image registration, fusion, occlusion detection method, apparatus and electronic equipment |
CN110866868A (en) * | 2019-10-25 | 2020-03-06 | 江苏荣策士科技发展有限公司 | Splicing method of binocular stereo images |
CN111028155A (en) * | 2019-12-17 | 2020-04-17 | 大连理工大学 | Parallax image splicing method based on multiple pairs of binocular cameras |
US11350073B2 (en) | 2019-12-17 | 2022-05-31 | Dalian University Of Technology | Disparity image stitching and visualization method based on multiple pairs of binocular cameras |
WO2021120407A1 (en) * | 2019-12-17 | 2021-06-24 | 大连理工大学 | Parallax image stitching and visualization method based on multiple pairs of binocular cameras |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||