CN108470324A - Robust binocular stereo image stitching method - Google Patents
Robust binocular stereo image stitching method
- Publication number: CN108470324A
- Application number: CN201810236089.XA
- Authority
- CN
- China
- Prior art keywords
- width
- view
- feature point
- right view
- left view
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Abstract
The invention discloses a robust binocular stereo image stitching method, comprising: acquiring two groups of images with a binocular camera and computing the disparity map between the left and right views of each group; extracting the feature points of each group and describing them; matching the feature points of each group with GMS feature matching and screening out mismatches to obtain an accurate set of matched feature-point pairs; constructing a new feature constraint from the disparities and the matched pairs, obtaining the homography that is optimal under this constraint, applying it as a global transform to the second group of images, and applying a local shape-preserving transform to the regions of the second group that do not overlap the first group; and fusing the transformed left views and the transformed right views respectively to obtain the stitched left and right views, which are then combined into the final stereo image. The present invention not only achieves seamless stitching, but its algorithm also has a certain robustness.
Description
Technical field
The present invention relates to the fields of computer vision and image processing, and in particular to a robust binocular stereo image stitching method.
Background technology
Image stitching is widely used and plays an important role in medicine, aerospace, entertainment, live broadcasting and other fields, enriching people's lives. With the development of VR and AR in particular, people are no longer satisfied with images shot from a single camera perspective and pursue higher-resolution, even 360-degree panoramic, high-quality images, which challenges traditional monocular image stitching. Traditional monocular stitching uses the perspective transform between images to warp them into a unified coordinate system and fuse them; it is a two-dimensional planar transform. Real scenes, however, have depth, and a purely two-dimensional transform cannot stitch such images well: misregistration between the images blurs the stitching result or even produces ghosting.
With the rise of binocular cameras and stereo images, stereo image stitching has begun to be studied. Stereo image stitching must not only produce a stitched image with a stereoscopic effect but also give the viewer a comfortable 3D experience, so projection distortion, deformation and vertical parallax must be reduced as much as possible during stitching. In addition, the stitching algorithm should adapt to various scenes and keep working even when few feature points can be extracted, which requires a certain robustness.
The above background is disclosed only to aid understanding of the concept and technical solution of the present invention; it does not necessarily belong to the prior art of this patent application. Absent concrete evidence that the above content was disclosed before the filing date of this application, the above background shall not be used to evaluate the novelty and inventiveness of this application.
Summary of the invention
To solve the above technical problems, the present invention proposes a robust binocular stereo image stitching method that not only achieves seamless stitching but whose algorithm also has a certain robustness.
To achieve the above object, the present invention adopts the following technical solution:
The invention discloses a robust binocular stereo image stitching method, comprising the following steps:
S1: Acquire two groups of images with a binocular camera, the first group comprising a first left view and a first right view and the second group comprising a second left view and a second right view, and compute the disparity between the first left view and the first right view and the disparity between the second left view and the second right view;
S2: Extract the feature points of each group of images and describe them;
S3: Match the feature points of each group from step S2 with GMS feature matching and screen out mismatches to obtain an accurate set of matched feature-point pairs;
S4: Construct a new feature constraint from the disparities of step S1 and the matched pairs of step S3, obtain the homography that is optimal under this constraint, apply it as a global transform to the second group of images, and apply a local shape-preserving transform to the regions of the second group that do not overlap the first group;
S5: Fuse the first left view with the transformed second left view and the first right view with the transformed second right view to obtain the stitched left and right views, then combine them into the final stereo image.
Preferably, step S2 specifically comprises: extracting 3000-8000 feature points per group of images with the ORB algorithm and describing the feature points with the BRIEF algorithm to generate multi-dimensional descriptors.
Preferably, step S3 specifically comprises: for the feature points of each group extracted in step S2, first matching the two left views and the two right views respectively by brute force, then screening out mismatches with the GMS method and feeding the result into RANSAC to obtain an accurate set of matched pairs.
Preferably, step S4 specifically comprises:
S41: Using the matched pairs screened in step S3 and the disparity maps of step S1, design a new feature constraint E_f, iteratively compute the optimal homography H_g, and according to H_g transform the second left view and the second right view into the coordinate systems of the first left view and the first right view respectively;
S42: Apply the shape-preserving transform H_s of the half-projective warp to the non-overlapping regions of the second left view and the second right view;
S43: Apply mesh optimization to the transformed second left view and second right view to limit vertical and horizontal parallax.
Preferably, step S41 specifically comprises:
Using the matched pair sets screened in step S3, together with the disparity D_1 between the first left view and the first right view and the disparity D_2 between the second left view and the second right view from step S1, design a new feature constraint E_f:
E_f = γ_l·E_l + γ_r·E_r + E_{l_r}
where E_l is the left-view constraint, E_r is the right-view constraint and E_{l_r} is the left-right view constraint; γ_l and γ_r are binary: γ_l is 1 when the two left views are being stitched and 0 otherwise, and γ_r is 1 when the two right views are being stitched and 0 otherwise.
The optimal homography H_g is computed iteratively under these constraints, and according to H_g the second left view and the second right view are transformed into the coordinate systems of the first left view and the first right view respectively.
Preferably, the left-view constraint E_l and the right-view constraint E_r are defined as follows:
In the formula for E_l, n_1 and n_2 are the numbers of feature-point pairs in the two matched sets it sums over, and H_l is the homography matrix of the left view in the iterative process; w_m and w_k are weights, each related to the Gaussian distance from the current feature point to all feature points on the corresponding image.
In the formula for E_r, n_3 and n_4 are the numbers of feature-point pairs in the two matched sets it sums over, and H_r is the homography matrix of the right view in the iterative process; w_i and w_j are weights, each related to the Gaussian distance from the current feature point to all feature points on the corresponding image.
Preferably: w_m denotes the weight of the m-th feature point in the second left view; w_k denotes the weight of the k-th feature point in the second left view; w_i denotes the weight of the i-th feature point in the second right view; w_j denotes the weight of the j-th feature point in the second right view.
Preferably, in the formula for the left-right view constraint E_{l_r}, n_5 is the number of feature-point pairs in the matched set it sums over and H_{l_r} is the homography matrix of the left or right view in the iterative process; w_s is a weight related to the Gaussian distance from the current feature point to all feature points on the corresponding image.
Preferably: w_s denotes the weight of the s-th feature point in the second left view.
Preferably, step S43 specifically comprises: applying mesh optimization respectively to the transformed second left view and second right view, limiting vertical and horizontal parallax so that the left-view total energy E_L and the right-view total energy E_R are minimized.
Compared with the prior art, the benefits of the present invention are: the binocular stereo image stitching method not only achieves seamless stitching and reduces ghosting, but its robust feature matching also filters out erroneous feature-point pairs, so that subsequent stitching is more accurate.
Description of the drawings
Fig. 1 is a flow diagram of the robust binocular stereo image stitching method of a preferred embodiment of the present invention.
Detailed description of embodiments
The invention is further described below with reference to the accompanying drawing and in conjunction with preferred embodiments.
As shown in Fig. 1, a preferred embodiment of the present invention proposes a robust binocular stereo image stitching method comprising the following steps:
S1: Acquire two groups of images with a binocular camera, each group comprising the left view captured by the left camera and the right view captured by the right camera, and compute the disparity map between the left and right views of each group.
Specifically, the two acquired groups are denoted I_1 and I_2. The first group I_1 comprises the first left view captured by the left camera and the first right view captured by the right camera; the second group I_2 comprises the second left view and the second right view. The disparity map between the left and right views of each group is computed: the disparity between the first left view and the first right view is denoted D_1, and the disparity between the second left view and the second right view is denoted D_2.
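The patent does not fix a particular disparity algorithm for step S1. As an illustration only, a minimal SSD block-matching sketch for a rectified pair might look as follows (window size, search range and the zero-fill at borders are assumptions, not the patent's method):

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=16, block=5):
    """Minimal SSD block-matching disparity for a rectified pair.
    left, right: 2-D float arrays; returns an integer disparity map
    (0 at borders and where no candidate is searched)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            # a point at column x in the left view appears at x - d in the right view
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                ssd = np.sum((patch - cand) ** 2)
                if ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp
```

Real systems would instead use a production stereo matcher; this sketch only shows the disparity convention used throughout the description.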
S2: Extract the feature points of each group with a feature extraction algorithm (e.g. SIFT, SURF or ORB) and describe them.
Specifically, in this embodiment 3000-8000 feature points are extracted per group with the ORB algorithm; the feature points are assigned an orientation and described with the BRIEF algorithm, generating 128-dimensional descriptors.
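A BRIEF descriptor is a binary string of pairwise intensity comparisons around a keypoint. The toy sketch below illustrates the idea only; real ORB additionally uses FAST detection and a steered (orientation-compensated) sampling pattern, both omitted here:

```python
import numpy as np

def brief_descriptor(img, kp, pairs):
    """BRIEF-style binary descriptor: for each offset pair (p, q)
    around keypoint kp=(y, x), emit 1 if the pixel at p is brighter
    than the pixel at q, else 0."""
    y, x = kp
    bits = [1 if img[y + dy1, x + dx1] > img[y + dy2, x + dx2] else 0
            for (dy1, dx1), (dy2, dx2) in pairs]
    return np.array(bits, dtype=np.uint8)

def sample_pairs(n_bits=128, patch=15, seed=0):
    """Random comparison pairs inside a (2*patch+1)^2 window; a fixed
    seed makes every keypoint use the same sampling pattern."""
    rng = np.random.default_rng(seed)
    offs = rng.integers(-patch, patch + 1, size=(n_bits, 2, 2))
    return [((int(a), int(b)), (int(c), int(d)))
            for (a, b), (c, d) in offs]
```

Because only intensity comparisons are stored, the descriptor is invariant to additive brightness changes.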
S3: Match the feature points of each group from step S2 with GMS feature matching and filter out mismatches to obtain a robust and accurate set of matched pairs.
Specifically: for the feature points extracted in step S2, first match the two left views and the two right views respectively by brute force; then, replacing the conventional ratio test, screen out mismatches with the GMS method, which exploits the fact that the neighbouring feature points of a correct match are also likely to match correctly and thus support it; feed the result into RANSAC, which then converges quickly. This finally yields robust and accurate matched pair sets: the pairs between the first and second left views, the pairs between the first and second right views, the pairs between the second left view and the first right view, the pairs between the second right view and the first left view, and the pairs between the second left view and the second right view.
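The brute-force stage pairs binary descriptors by Hamming distance. The sketch below adds a mutual cross-check, which is a simpler consistency filter than GMS and is used here purely for illustration of the matching stage, not as the patent's screening method:

```python
import numpy as np

def hamming_cross_check_match(desc1, desc2):
    """Brute-force Hamming matching with a mutual (cross-check) test:
    keep (i, j) only if j is the nearest neighbour of i AND i is the
    nearest neighbour of j. desc1: (n1, b) 0/1 arrays; desc2: (n2, b)."""
    # pairwise Hamming distances over the bit dimension
    dist = (desc1[:, None, :] != desc2[None, :, :]).sum(axis=2)
    nn12 = dist.argmin(axis=1)          # best j for each i
    nn21 = dist.argmin(axis=0)          # best i for each j
    return [(i, int(j)) for i, j in enumerate(nn12) if nn21[j] == i]
```

GMS would additionally count supporting matches in a grid of cells around each candidate; the surviving pairs are then passed to RANSAC as in the embodiment.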
S4: Construct a new feature constraint E_f from the disparity maps of step S1 and the matched pairs of step S3, obtain the homography H_g that is optimal under this constraint, apply it as a global transform to the second group of images, apply a local shape-preserving transform H_s to the regions of the second group that do not overlap the first group, and then correct the remaining distortion with mesh optimization.
Step S4 specifically comprises:
S41: Using the matched pair sets screened in step S3 and the disparity maps D_1 and D_2 of step S1, design a new feature constraint E_f:
E_f = γ_l·E_l + γ_r·E_r + E_{l_r}
where γ_l and γ_r are binary: γ_l is 1 when the two left views are being stitched and 0 otherwise; likewise γ_r is 1 when the two right views are being stitched and 0 otherwise. The feature constraint consists of three parts: the left-view constraint E_l, the right-view constraint E_r and the left-right view constraint E_{l_r}.
In the formula for the left-view constraint E_l, n_1 and n_2 are the numbers of feature-point pairs in the two matched sets it sums over and H_l is the homography matrix of the left view in the iterative process; w_m and w_k are weights related to the Gaussian distance from the current feature point to all feature points on the image, w_m denoting the weight of the m-th feature point in the second left view and w_k that of the k-th feature point in the second left view.
The right-view constraint E_r is obtained analogously: in its formula, n_3 and n_4 are the numbers of feature-point pairs in the two matched sets it sums over and H_r is the homography matrix of the right view in the iterative process; w_i and w_j are weights related to the Gaussian distance from the current feature point to all feature points on the image, w_i denoting the weight of the i-th feature point in the second right view and w_j that of the j-th feature point in the second right view.
In the formula for the left-right view constraint E_{l_r}, n_5 is the number of feature-point pairs in the matched set it sums over and H_{l_r} is the homography matrix of the left or right view in the iterative process; w_s is a weight related to the Gaussian distance from the current feature point to all feature points on the image, denoting the weight of the s-th feature point in the second left view.
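Each constraint term weighs the transfer error of matched points under a candidate homography. The sketch below shows a generic weighted transfer-error term and the gated sum E_f; the exact per-term weighting and point sets in the patent are not reproduced here, so treat this as an assumed simplification:

```python
import numpy as np

def reproj_energy(H, pts_src, pts_dst, weights):
    """Weighted sum of squared transfer errors ||H(p) - q||^2 over
    matched pairs. pts_src, pts_dst: (n, 2) arrays; weights: (n,)."""
    ones = np.ones((len(pts_src), 1))
    ph = np.hstack([pts_src, ones]) @ H.T        # homogeneous transform
    proj = ph[:, :2] / ph[:, 2:3]                # back to Cartesian
    err = np.sum((proj - pts_dst) ** 2, axis=1)
    return float(np.sum(weights * err))

def feature_constraint(H, sets, g_l, g_r):
    """E_f = g_l*E_l + g_r*E_r + E_lr, with `sets` mapping each term
    to its (pts_src, pts_dst, weights) triple. g_l, g_r are the 0/1
    gates of the patent (1 when stitching the two left / right views)."""
    E_l = reproj_energy(H, *sets['left'])
    E_r = reproj_energy(H, *sets['right'])
    E_lr = reproj_energy(H, *sets['cross'])
    return g_l * E_l + g_r * E_r + E_lr
```

Minimizing such a term over H (e.g. by iterative re-weighted least squares) is one way to realize the "iteratively compute the optimal homography" step.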
The optimal homography H_g is computed iteratively under these constraints, and according to H_g the second left view and the second right view are transformed into the coordinate systems of the first left view and the first right view respectively.
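Applying H_g means mapping the second views' pixels into the first views' coordinate systems. A common preliminary step, assumed here rather than spelled out in the patent, is to warp the image corners through H_g to size the output canvas:

```python
import numpy as np

def warp_corners(H, w, h):
    """Map the four image corners through homography H and return the
    bounding box (x0, y0, x1, y1) of the warped view on the canvas."""
    corners = np.array([[0, 0, 1], [w, 0, 1], [0, h, 1], [w, h, 1]], float)
    p = corners @ H.T
    p = p[:, :2] / p[:, 2:3]      # normalize homogeneous coordinates
    x0, y0 = p.min(axis=0)
    x1, y1 = p.max(axis=0)
    return x0, y0, x1, y1
```

The full warp would then resample every pixel by inverse mapping through H_g.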
S42: In the non-overlapping regions of the second left view and the second right view, apply the shape-preserving transform H_s of the half-projective warp to both views to reduce local distortion.
S43: Apply mesh optimization respectively to the transformed second left view and the transformed second right view, limiting vertical and horizontal parallax so that the left-view total energy E_L and the right-view total energy E_R are minimized.
The left-view total energy E_L is expressed as:
E_L = α·E_gl + β·E_sl + E_yl + E_dl
where E_gl is the global alignment term of the left view, E_sl is its shape-preservation term, E_yl is its vertical-parallax limiting term, E_dl is its horizontal-parallax limiting term, and α and β are weights, each taking a value in [0, 1].
The global alignment term E_gl expresses that the feature points of the transformed second left view should coincide as closely as possible with the corresponding feature points of the reference image (the first left view); in its formula, the summand involves the m-th feature point of the second left view after the transform.
The shape-preservation term E_sl is defined over the mesh: in its formula, the three vertices of each grid cell after the transform appear together with the saliency ω_i of the grid cell and coefficients u and v computed from the three vertices v_i, v_j, v_k of the cell before the transform.
The vertical-parallax limiting term E_yl expresses that the y-coordinates of corresponding feature points in the second left view and the second right view should be as close as possible; in its formula, the terms are the y-coordinates of the transformed second left view and of the transformed second right view.
The horizontal-parallax limiting term E_dl expresses that the difference of the x-coordinates of corresponding feature points in the transformed second left view and second right view should stay as close as possible to the same difference before the transform; in its formula, the terms are the x-coordinates of the second left view and the second right view after and before the transform.
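The parallax limiting terms and the weighted total E_L can be sketched numerically. The per-point forms below (squared differences over corresponding coordinates) are simplified assumptions for illustration; the patent's actual terms run over mesh vertices and matched feature points:

```python
import numpy as np

def vertical_parallax_term(yl, yr):
    """E_y: squared difference of the y-coordinates of corresponding
    points in the warped left and right views (near 0 for a
    comfortable stereo pair)."""
    return float(np.sum((yl - yr) ** 2))

def horizontal_parallax_term(xl, xr, xl0, xr0):
    """E_d: the warped horizontal disparity (xl - xr) should stay
    close to the pre-warp disparity (xl0 - xr0)."""
    return float(np.sum(((xl - xr) - (xl0 - xr0)) ** 2))

def left_view_energy(E_g, E_s, E_y, E_d, alpha=0.5, beta=0.5):
    """E_L = alpha*E_g + beta*E_s + E_y + E_d, with alpha, beta in [0, 1]."""
    return alpha * E_g + beta * E_s + E_y + E_d
```

In a full implementation this energy would be minimized over the mesh vertex positions, e.g. with a sparse linear solver.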
The right-view total energy E_R is derived analogously:
E_R = α·E_gr + β·E_sr + E_yr + E_dr
where E_gr is the global alignment term of the right view, E_sr is its shape-preservation term, E_yr is its vertical-parallax limiting term, E_dr is its horizontal-parallax limiting term, and α and β are weights, each taking a value in [0, 1].
The global alignment term E_gr expresses that the feature points of the transformed second right view should coincide as closely as possible with the corresponding feature points of the reference image (the first right view); in its formula, the summand involves the m-th feature point of the second right view after the transform.
The shape-preservation term E_sr is defined over the mesh in the same way as E_sl: in its formula, the three vertices of each grid cell after the transform appear together with the saliency ω_i of the grid cell and coefficients u and v computed from the three vertices v_i, v_j, v_k of the cell before the transform.
The vertical-parallax limiting term E_yr expresses that the y-coordinates of corresponding feature points in the second left view and the second right view should be as close as possible; in its formula, the terms are the y-coordinates of the transformed second left view and of the transformed second right view.
The horizontal-parallax limiting term E_dr expresses that the difference of the x-coordinates of corresponding feature points in the transformed second left view and second right view should stay as close as possible to the same difference before the transform; in its formula, the terms are the x-coordinates of the second left view and the second right view after and before the transform.
S5: Fuse the first left view with the second left view transformed in step S4, and the first right view with the second right view transformed in step S4, to obtain the stitched left and right views; then combine the stitched left and right views into the final stereo image.
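The patent does not fix a fusion method for step S5. A minimal sketch, assuming simple averaging in the overlap region (real pipelines often use feathering or multi-band blending instead):

```python
import numpy as np

def linear_blend(img1, img2, mask1, mask2):
    """Fuse two warped views on a common canvas: average where both
    have valid pixels, pass through where only one does.
    imgX: (h, w) float arrays; maskX: (h, w) bool validity masks."""
    out = np.zeros_like(img1)
    both = mask1 & mask2
    only1 = mask1 & ~mask2
    only2 = mask2 & ~mask1
    out[only1] = img1[only1]
    out[only2] = img2[only2]
    out[both] = 0.5 * (img1[both] + img2[both])
    return out
```

Running the same fusion on the left pair and on the right pair yields the stitched left and right views that are combined into the stereo image.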
The binocular stereo image stitching method of the present invention achieves seamless stitching and reduces ghosting by applying the half-projective warp in the non-overlapping regions; the GMS algorithm, replacing the traditional ratio test, matches feature points robustly and filters out erroneous pairs, and the disparity maps assist the stitching, so that subsequent stitching is more accurate.
The above is a further detailed description of the present invention in conjunction with specific preferred embodiments, but the specific implementation of the invention shall not be considered limited to these descriptions. For those skilled in the art to which the present invention belongs, several equivalent substitutions or obvious modifications of identical performance or use can be made without departing from the inventive concept, and all of them shall be considered to fall within the protection scope of the present invention.
Claims (10)
1. A robust binocular stereo image stitching method, characterized by comprising the following steps:
S1: acquiring two groups of images with a binocular camera, the first group comprising a first left view and a first right view and the second group comprising a second left view and a second right view, and computing the disparity between the first left view and the first right view and the disparity between the second left view and the second right view;
S2: extracting the feature points of each group of images and describing them;
S3: matching the feature points of each group from step S2 with GMS feature matching and screening out mismatches to obtain an accurate set of matched feature-point pairs;
S4: constructing a new feature constraint from the disparities of step S1 and the matched pairs of step S3, obtaining the homography that is optimal under this constraint, applying it as a global transform to the second group of images, and applying a local shape-preserving transform to the regions of the second group that do not overlap the first group;
S5: fusing the first left view with the transformed second left view and the first right view with the transformed second right view to obtain the stitched left and right views, then combining them into the final stereo image.
2. The binocular stereo image stitching method according to claim 1, characterized in that step S2 specifically comprises: extracting 3000-8000 feature points per group of images with the ORB algorithm and describing the feature points with the BRIEF algorithm to generate multi-dimensional descriptors.
3. The binocular stereo image stitching method according to claim 1, characterized in that step S3 specifically comprises: for the feature points of each group extracted in step S2, first matching the two left views and the two right views respectively by brute force, then screening out mismatches with the GMS method and feeding the result into RANSAC to obtain an accurate set of matched pairs.
4. The binocular stereo image stitching method according to claim 1, characterized in that step S4 specifically comprises:
S41: using the matched pairs screened in step S3 and the disparity maps of step S1, designing a new feature constraint E_f, iteratively computing the optimal homography H_g, and according to H_g transforming the second left view and the second right view into the coordinate systems of the first left view and the first right view respectively;
S42: applying the shape-preserving transform H_s of the half-projective warp to the non-overlapping regions of the second left view and the second right view;
S43: applying mesh optimization to the transformed second left view and second right view to limit vertical and horizontal parallax.
5. The binocular stereo image stitching method according to claim 4, characterized in that step S41 specifically comprises:
using the matched pair sets screened in step S3, together with the disparity D_1 between the first left view and the first right view and the disparity D_2 between the second left view and the second right view from step S1, designing a new feature constraint E_f:
E_f = γ_l·E_l + γ_r·E_r + E_{l_r}
wherein E_l is the left-view constraint, E_r is the right-view constraint and E_{l_r} is the left-right view constraint; γ_l and γ_r are binary: γ_l is 1 when the two left views are being stitched and 0 otherwise, and γ_r is 1 when the two right views are being stitched and 0 otherwise;
the optimal homography H_g is computed iteratively under these constraints, and according to H_g the second left view and the second right view are transformed into the coordinate systems of the first left view and the first right view respectively.
6. The binocular stereo image stitching method according to claim 5, characterized in that the left-view constraint E_l and the right-view constraint E_r are defined as follows:
in the formula for E_l, n_1 and n_2 are the numbers of feature-point pairs in the two matched sets it sums over and H_l is the homography matrix of the left view in the iterative process; w_m and w_k are weights, each related to the Gaussian distance from the current feature point to all feature points on the corresponding image;
in the formula for E_r, n_3 and n_4 are the numbers of feature-point pairs in the two matched sets it sums over and H_r is the homography matrix of the right view in the iterative process; w_i and w_j are weights, each related to the Gaussian distance from the current feature point to all feature points on the corresponding image.
7. The binocular stereo image stitching method according to claim 6, characterized in that: w_m denotes the weight of the m-th feature point in the second left view; w_k denotes the weight of the k-th feature point in the second left view; w_i denotes the weight of the i-th feature point in the second right view; w_j denotes the weight of the j-th feature point in the second right view.
8. The binocular stereo image stitching method according to claim 5, characterized in that, in the formula for the left-right view constraint E_{l_r}, n_5 is the number of feature-point pairs in the matched set it sums over and H_{l_r} is the homography matrix of the left or right view in the iterative process; w_s is a weight related to the Gaussian distance from the current feature point to all feature points on the corresponding image.
9. The binocular stereo image stitching method according to claim 8, characterized in that: w_s denotes the weight of the s-th feature point in the second left view.
10. The binocular stereo image stitching method according to claim 4, characterized in that step S43 specifically comprises: applying mesh optimization respectively to the transformed second left view and second right view and limiting vertical and horizontal parallax so that the left-view total energy E_L and the right-view total energy E_R are minimized.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810236089.XA (CN108470324B) | 2018-03-21 | 2018-03-21 | Robust binocular stereo image stitching method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108470324A true CN108470324A (en) | 2018-08-31 |
CN108470324B CN108470324B (en) | 2022-02-25 |
Family
ID=63265751
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810236089.XA Active CN108470324B (en) | 2018-03-21 | 2018-03-21 | Robust binocular stereo image splicing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108470324B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080240612A1 (en) * | 2007-03-30 | 2008-10-02 | Intel Corporation | Non-overlap region based automatic global alignment for ring camera image mosaic |
CN104290319A (en) * | 2008-03-07 | 2015-01-21 | Imra美国公司 | Transparent material processing with ultrashort pulse laser |
US20150054913A1 (en) * | 2013-08-21 | 2015-02-26 | Jaunt Inc. | Image stitching |
WO2016165016A1 (en) * | 2015-04-14 | 2016-10-20 | Magor Communications Corporation | View synthesis-panorama |
CN105389787A (en) * | 2015-09-30 | 2016-03-09 | 华为技术有限公司 | Panorama image stitching method and device |
CN105678687A (en) * | 2015-12-29 | 2016-06-15 | 天津大学 | Stereo image stitching method based on content of images |
CN107067370A (en) * | 2017-04-12 | 2017-08-18 | 长沙全度影像科技有限公司 | A kind of image split-joint method based on distortion of the mesh |
CN107767339A (en) * | 2017-10-12 | 2018-03-06 | 深圳市未来媒体技术研究院 | A kind of binocular stereo image joining method |
Non-Patent Citations (2)
Title |
---|
LU WANG et al.: "Fused multi-sensor information image stitching", 《INTELLIGENT SCIENCE AND INTELLIGENT DATA ENGINEERING》 *
储珺 (CHU Jun): "De-redundancy processing of multi-view stitching data combined with image features", 《计算机应用研究》 (Application Research of Computers) *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109493282A (en) * | 2018-11-21 | 2019-03-19 | 清华大学深圳研究生院 | A kind of stereo-picture joining method for eliminating movement ghost image |
CN110111255B (en) * | 2019-04-24 | 2023-02-28 | 天津大学 | Stereo image splicing method |
CN110111255A (en) * | 2019-04-24 | 2019-08-09 | 天津大学 | A kind of stereo-picture joining method |
CN110211043A (en) * | 2019-05-11 | 2019-09-06 | 复旦大学 | A kind of method for registering based on grid optimization for Panorama Mosaic |
CN110211043B (en) * | 2019-05-11 | 2023-06-27 | 复旦大学 | Registration method based on grid optimization for panoramic image stitching |
CN110120013A (en) * | 2019-05-15 | 2019-08-13 | 深圳市凌云视迅科技有限责任公司 | A kind of cloud method and device |
CN110120013B (en) * | 2019-05-15 | 2023-10-20 | 深圳市凌云视迅科技有限责任公司 | Point cloud splicing method and device |
CN110458875A (en) * | 2019-07-30 | 2019-11-15 | 广州市百果园信息技术有限公司 | Detection method, image split-joint method, related device and the equipment of abnormal point pair |
CN110458875B (en) * | 2019-07-30 | 2021-06-15 | 广州市百果园信息技术有限公司 | Abnormal point pair detection method, image splicing method, corresponding device and equipment |
CN110866868A (en) * | 2019-10-25 | 2020-03-06 | 江苏荣策士科技发展有限公司 | Splicing method of binocular stereo images |
WO2021120407A1 (en) * | 2019-12-17 | 2021-06-24 | 大连理工大学 | Parallax image stitching and visualization method based on multiple pairs of binocular cameras |
US11350073B2 (en) | 2019-12-17 | 2022-05-31 | Dalian University Of Technology | Disparity image stitching and visualization method based on multiple pairs of binocular cameras |
US11175490B1 (en) | 2020-11-10 | 2021-11-16 | Institute Of Automation, Chinese Academy Of Sciences | Shutter-type adaptive three-dimensional display system based on medical microscopic imaging |
CN112068300B (en) * | 2020-11-10 | 2021-03-16 | 中国科学院自动化研究所 | Shutter type self-adaptive 3D display system based on medical microscopic image |
CN112068300A (en) * | 2020-11-10 | 2020-12-11 | 中国科学院自动化研究所 | Shutter type self-adaptive 3D display system based on medical microscopic image |
CN115953780A (en) * | 2023-03-10 | 2023-04-11 | 清华大学 | Multi-dimensional light field complex scene graph construction method based on multi-view information fusion |
Also Published As
Publication number | Publication date |
---|---|
CN108470324B (en) | 2022-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108470324A (en) | Robust binocular stereo image stitching method | |
CN107767339B (en) | Binocular stereo image splicing method | |
CN110211043A (en) | Grid-optimization-based registration method for panoramic image stitching | |
EP3321881A1 (en) | Novel view synthesis using deep convolutional neural networks | |
Rematas et al. | Image-based synthesis and re-synthesis of viewpoints guided by 3d models | |
CN108898665A (en) | Three-dimensional facial reconstruction method, device, equipment and computer readable storage medium | |
CN105678687A (en) | Stereo image stitching method based on content of images | |
CN112734890B (en) | Face replacement method and device based on three-dimensional reconstruction | |
Yan et al. | Stereoscopic image stitching based on a hybrid warping model | |
CN112085659A (en) | Panorama splicing and fusing method and system based on dome camera and storage medium | |
Li et al. | A unified framework for street-view panorama stitching | |
CN109493282A (en) | Stereo image stitching method for eliminating motion ghosting | |
Li et al. | Uphdr-gan: Generative adversarial network for high dynamic range imaging with unpaired data | |
CN109472752A (en) | Multi-exposure fusion system based on aerial images | |
CN112862683A (en) | Adjacent image splicing method based on elastic registration and grid optimization | |
CN108898550A (en) | Image stitching method based on spatial triangular patch fitting | |
CN108616746A (en) | Method for converting 2D panoramic images to 3D panoramic images based on deep learning | |
Zhou et al. | Single-view view synthesis with self-rectified pseudo-stereo | |
CN111047513A (en) | Robust image alignment method and device for cylindrical panoramic stitching | |
CN110111255A (en) | Stereo image stitching method | |
Fu et al. | Image Stitching Techniques Applied to Plane or 3D Models: A Review | |
CN108830804A (en) | Virtual reality fusion Fuzzy Consistent processing method based on line spread function standard deviation | |
CN114066733A (en) | Unmanned aerial vehicle image splicing method based on image convolution | |
Chen et al. | Time-of-Day Neural Style Transfer for Architectural Photographs | |
Song et al. | Image Data Fusion Algorithm Based on Virtual Reality Technology and Nuke Software and Its Application |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||