CN105303615A - Combination method of two-dimensional stitching and three-dimensional surface reconstruction of image - Google Patents
Combination method of two-dimensional stitching and three-dimensional surface reconstruction of image
- Publication number
- CN105303615A (application CN201510752244.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- point
- feature point
- three-dimensional surface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a method that combines the two-dimensional stitching and the three-dimensional surface reconstruction of images. The method comprises the following steps: preprocessing the input images and extracting feature points; matching the feature points and screening the successfully paired point sets to obtain the multi-view geometric correspondences of the feature points; computing the projective reconstruction of the scene from these correspondences to obtain the relative positions and attitudes of the cameras in three-dimensional space; performing metric reconstruction and global optimization; choosing a suitable viewing angle and observation plane according to the relative camera positions, and establishing the projection of all images onto that observation plane; and carrying out the image stitching and the three-dimensional surface reconstruction. The method fully considers what the two techniques have in common and where they differ, extracts the steps shared by their implementations, and can obtain the panoramic image and the three-dimensional surface of the scene simultaneously in a comparatively short time.
Description
Technical field
The invention belongs to the field of computer vision and image processing, and in particular relates to a method that combines two-dimensional image stitching with three-dimensional surface reconstruction.
Background art
With the rapid development of computer technology, computer vision and image processing techniques have been widely applied to many aspects of modern life, in particular to image stitching and image-based three-dimensional modeling. Today, two-dimensional or three-dimensional panoramas of many scenic spots can be viewed on the web pages of map providers and tourism sites. Panoramic photography also plays an important role in group photography, where it can render a curved queue of people into a single strip-shaped photograph.
Both two-dimensional image stitching and three-dimensional reconstruction are developing rapidly. Image stitching based on the fundamental matrix and the homography matrix has matured; the cv::Stitcher class integrated in OpenCV produces nearly seamless results when stitching a moderate number of images. In image-based three-dimensional reconstruction, structure from motion (SfM) is an extremely successful approach: it estimates the relative positions and attitudes of the cameras in space from point correspondences found between the images, and experiments show that it recovers the relative camera positions very well. Although image stitching and three-dimensional surface reconstruction have each matured in their own right, no method that processes the two in parallel has been published so far.
In industrial or medical inspection, high-magnification microscope cameras or endoscopes are commonly used as imaging devices. Because of the small lens size or the high magnification, the field of view of such cameras is very limited, so inspecting a surface usually requires capturing a large number of images in one session. To reduce the workload of inspecting the images by eye, the large-field-of-view image obtained by stitching and the relatively positioned three-dimensional model obtained by surface reconstruction greatly assist image analysis and localization in machine-vision inspection. A method that combines two-dimensional image stitching with three-dimensional surface reconstruction is therefore urgently needed in such scenarios.
Summary of the invention
To solve the above problems, the object of the present invention is to provide a method that combines two-dimensional image stitching with three-dimensional surface reconstruction.
To achieve this object, the combination method of two-dimensional image stitching and three-dimensional surface reconstruction provided by the invention comprises the following steps, carried out in order:
Step A: preprocess the input images so that their feature points become more plentiful, then extract the feature points;
Step B: match the above feature points and screen out the successfully paired point sets, obtaining the multi-view geometric correspondences of the feature points;
Step C: compute the projective reconstruction of the scene from the feature-point correspondences, obtaining the relative positions and attitudes of the cameras in three-dimensional space;
Step D: perform metric reconstruction and global optimization;
Step E: according to the relative camera positions in three-dimensional space, choose a suitable viewing angle and observation plane, and establish the projection of all images onto that plane;
Step F: image stitching: remap all images onto the target plane, stitch the reprojected images, and obtain the two-dimensional panoramic image;
Step G: three-dimensional surface reconstruction: restore the image scene in three-dimensional space from the relative camera positions and attitudes computed in step C.
In step A, the method of preprocessing the images is:
1) read the images frame by frame;
2) if the camera parameters are known, correct the image distortion; otherwise go directly to step 3);
3) denoise the images by Gaussian filtering;
4) convert color images to grayscale;
5) analyze the brightness or gray-level distribution of the grayscale images;
6) according to the analysis, adjust the brightness distribution of the images so that the gray levels spread fairly evenly over 0-255.
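Steps 3) to 6) of the preprocessing can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: the function name `preprocess`, the BT.601 grayscale weights, and the histogram-equalization choice for step 6) are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(rgb):
    """Blur, convert to grayscale, and equalize the histogram so the
    gray levels spread fairly evenly over 0-255 (steps 3-6 above)."""
    # 4) color -> gray (BT.601 weights, one common convention)
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    # 3) Gaussian noise reduction
    gray = gaussian_filter(gray, sigma=1.0)
    # 5)-6) analyze the gray-level distribution and equalize it
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    return (cdf[gray.astype(np.uint8)] * 255).astype(np.uint8)
```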
In step A, the method of extracting the feature points is:
1) extract feature points from the preprocessed images with the SURF or (A)KAZE algorithm;
2) the value returned by SURF/(A)KAZE feature extraction is a feature-point vector whose physical meaning is the distribution of gradient directions and magnitudes; the direction corresponding to the second-largest gradient weight is defined as the auxiliary direction;
3) sort the feature points in ascending order of the auxiliary directions of their vectors; the directions are discretized into 8 values, each covering a 45° sector, the angle being the difference between the auxiliary direction and the principal direction; the feature-point set is then divided into 8 classes in ascending order of auxiliary direction;
4) build a hash-table-like data structure in which each chain stores the feature points sharing the same auxiliary direction.
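Steps 3) and 4) amount to binning the features by the offset between their auxiliary and principal orientations. A minimal sketch under assumptions (the array names and the degrees convention are not given by the patent; the orientations would come from the SURF/(A)KAZE descriptors):

```python
import numpy as np

def bucket_by_auxiliary_direction(principal, auxiliary):
    """Quantize each feature's auxiliary-direction offset into 8 sectors
    of 45 degrees and store feature indices in per-sector chains."""
    offset = (np.asarray(auxiliary, float) - np.asarray(principal, float)) % 360.0
    sector = (offset // 45).astype(int) % 8      # discretize to 8 values
    chains = [[] for _ in range(8)]              # hash-table-like structure
    for idx, s in enumerate(sector):
        chains[s].append(idx)                    # each chain stores feature ids
    return chains
```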
In step B, the method of matching the feature points is:
1) fix the matching precision d_max: the Euclidean distance between mutually matched feature points must be smaller than d_max;
2) take a feature point as the point to be matched;
3) read the auxiliary direction of its feature vector and compute which chain the target point belongs to;
4) compute the Euclidean distances to all feature points in that chain, and choose as the match the feature point whose distance to the point to be matched is the smallest and below d_max;
5) if several candidate matches lie at nearly the same distance from the point to be matched, abandon the current feature point and proceed directly to the next one.
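The chain search of steps 3) to 5) can be sketched as below. The helper is hypothetical; in particular, the `ambiguity` ratio stands in for the patent's "several candidates at nearly the same distance" test, whose exact threshold the patent does not give.

```python
import numpy as np

def match_in_chain(query, chain_descs, d_max, ambiguity=0.9):
    """Find the nearest descriptor in the query's chain; reject it if it is
    farther than d_max or if a second candidate is almost as close."""
    d = np.linalg.norm(chain_descs - query, axis=1)   # Euclidean distances
    order = np.argsort(d)
    best = order[0]
    if d[best] >= d_max:
        return None                                   # no point within d_max
    if len(order) > 1 and d[best] > ambiguity * d[order[1]]:
        return None                                   # ambiguous: drop match
    return int(best)
```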
In step C, the method of establishing the projective reconstruction is:
1) let X_i denote a space point, P_j a camera, and x_i^j the image of the space point X_i in camera P_j, that is: x_i^j = P_j X_i; there then exists a mapping that describes how the same space point X_i is imaged in cameras P_j and P_k (the epipolar constraint (x_i^k)^T F_jk x_i^j = 0, where F_jk is the fundamental matrix between the two views);
2) compute the projective reconstruction of the scene by minimizing a cost function with random sample consensus (RANSAC), i.e. solve for the P_j, and compute the X_i by triangulation.
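The triangulation of step 2) is commonly done linearly. A sketch of DLT triangulation of a space point from two views, assuming the 3x4 camera matrices have already been estimated (e.g. by the RANSAC stage); this is one standard technique, not necessarily the patent's exact solver:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the space point X from its
    images x1, x2 under the 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A is the homogeneous X
    X = vt[-1]
    return X[:3] / X[3]           # de-homogenize
```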
In step D, the method of performing the metric reconstruction is:
1) if the input data contain the intrinsic matrix of the camera, the metric reconstruction can be carried out directly; otherwise, obtain the camera intrinsics by solving the self-calibration equations through the correspondence between the dual of the absolute conic and the dual image of the absolute quadric;
2) bundle adjustment: denote by X̂_i the estimated space points and by P̂_j the estimated camera positions and attitudes in space, and choose the reprojection error as the cost function, that is:
min_{P̂_j, X̂_i} Σ_{i,j} d(P̂_j X̂_i, x_i^j)²
Taking the above reconstruction result as the initial value, iterate to find the optimum that minimizes the cost function.
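The bundle-adjustment cost above can be minimized with a generic least-squares solver. A toy sketch assuming SciPy, identity intrinsics and rotation, and refining only one camera translation; a real implementation refines all P̂_j and X̂_i jointly:

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(t, X, x_obs):
    """Residuals of the bundle-adjustment cost d(P_hat X_hat, x_i^j)
    for a toy camera with K = I, R = I and translation t."""
    Xc = X + t                      # points in the camera frame
    proj = Xc[:, :2] / Xc[:, 2:3]   # pinhole projection
    return (proj - x_obs).ravel()

def refine_translation(t0, X, x_obs):
    # iterate from the initial reconstruction toward the cost minimum
    return least_squares(reprojection_residuals, t0, args=(X, x_obs)).x
```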
In step E, the concrete method of choosing a suitable viewing angle and observation plane according to the relative camera positions in three-dimensional space, and of establishing the projection of all images onto that plane, is:
1) choose the image stitching plane as the viewing plane;
2) using the relative camera poses established by the above metric reconstruction, reproject the images to be stitched onto the stitching plane by a view transformation, projecting all images to the same scale, i.e. keeping the same target a similar size across the different images.
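The reprojection onto the stitching plane is a planar homography per image. A sketch of the point-level operation, assuming the 3x3 homography H is derived from the metric reconstruction; de-homogenization removes the scale factor so the same target keeps a similar size across views:

```python
import numpy as np

def warp_points(H, pts):
    """Map image points onto the stitching plane via a 3x3 homography H."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    w = ph @ H.T
    return w[:, :2] / w[:, 2:3]                     # de-homogenize (drop scale)
```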
In step F, the concrete method of image stitching is:
1) stitch the view-transformed images into one large image;
2) if reprojected pixels in an overlap region do not coincide, determine the pixel positions with a random sample consensus procedure: compute the center of the multiple candidate pixels and discard the points that deviate too far from that center;
3) for feature points lost in an image, determine their values by interpolation;
4) adjust local brightness and contrast so that the stitched image looks more natural.
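The consensus of step 2) can be sketched as follows. This is a simplified stand-in for the patent's RANSAC step: a robust center is taken as the coordinate-wise median and deviating candidates are dropped; `tol` is an assumed pixel tolerance not specified by the patent.

```python
import numpy as np

def consensus_pixel(candidates, tol=1.0):
    """Given non-coincident reprojected pixel positions, drop candidates
    that deviate from a robust center by more than tol, then average."""
    c = np.asarray(candidates, dtype=float)
    center = np.median(c, axis=0)                   # robust center estimate
    keep = np.linalg.norm(c - center, axis=1) <= tol
    return c[keep].mean(axis=0)
```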
In step G, the concrete method of three-dimensional surface reconstruction is:
1) according to the metric reconstruction computed in step D, reproject the feature points of the images into three-dimensional space;
2) eliminate the "one point projected to several locations" phenomenon caused by model error and noise: for the several positions reconstructed from the same feature point, take their weighted mean as the position of that feature point;
3) jointly build, from the feature-point sets, the triangular mesh of the images and the triangulation in three-dimensional space, and establish the correspondence between the meshes;
4) fill the holes in the triangular mesh of the three-dimensional surface from the images by image fusion.
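The triangulation of step 3) can be sketched with a Delaunay triangulation, assuming SciPy and a 2.5D surface (triangulate in the (x, y) plane and lift to 3D). The patent does not fix a particular triangulation algorithm, so this is one plausible choice:

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_from_features(points_3d):
    """Triangulate reprojected feature points into a surface mesh by
    Delaunay triangulation of their (x, y) coordinates."""
    tri = Delaunay(points_3d[:, :2])   # triangulate in the base plane
    return tri.simplices               # each row: vertex indices of a triangle
```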
The combination method of two-dimensional image stitching and three-dimensional surface reconstruction provided by the invention is the first to combine the two techniques: from a group of uncalibrated images or a video segment it obtains both the panoramic image of the scene and information about its three-dimensional surface. The method fully considers the similarities and differences of the two techniques and extracts the steps common to their implementations, so the panoramic image and the three-dimensional surface are obtained simultaneously in a comparatively short time. Compared with traditional two-dimensional stitching, the bundle adjustment of the three-dimensional reconstruction globally optimizes the camera attitudes, so the combined method effectively reduces accumulated computation error. Moreover, applying the three-dimensionally computed information to the stitching makes the stitching viewpoint adjustable.
Another advantage of the present invention is its fast feature-point matching. Since sorting and classifying the data costs relatively little time compared with computing the feature correspondences, the invention sorts and classifies the feature-point set, which markedly reduces the time complexity of matching: with the candidates spread over 8 chains, each query searches only one chain, so in theory the matching cost is only 12.5% of the original.
Brief description of the drawings
Fig. 1 is the flowchart of the combination method of two-dimensional image stitching and three-dimensional surface reconstruction provided by the invention.
Detailed description of the embodiments
The combination method of two-dimensional image stitching and three-dimensional surface reconstruction provided by the invention is described in detail below with reference to the drawings and specific embodiments.
The input processed by the method may be either an image sequence or a video segment. Because consecutive frames of a video are very similar, frames are extracted from a video by sampling; the sampling rate is adapted to the camera's rate of motion, and it suffices that two adjacent samples share a certain overlap region. For an image sequence, every image is processed.
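The sampling rule for video input can be sketched as follows. The quantities are hypothetical: `displacements` is a per-frame estimate of camera motion in pixels and `max_shift` the largest accumulated shift that still guarantees overlap between kept frames; the patent specifies neither numerically.

```python
def sample_frames(displacements, max_shift):
    """Keep a frame whenever the accumulated camera shift since the last
    kept frame reaches the overlap budget; faster motion gives denser
    sampling, as the text above requires."""
    kept, acc = [0], 0.0
    for i, d in enumerate(displacements[1:], start=1):
        acc += d
        if acc >= max_shift:
            kept.append(i)
            acc = 0.0
    return kept
```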
As shown in Fig. 1, the combination method of two-dimensional image stitching and three-dimensional surface reconstruction provided by the invention comprises the following steps, carried out in order:
Step A: preprocess the input images so that their feature points become more plentiful, then extract the feature points.
The method of preprocessing the images comprises the following steps:
1) read the images frame by frame;
2) if the camera parameters are known, correct the image distortion; otherwise go directly to step 3);
3) denoise the images by Gaussian filtering;
4) convert color images to grayscale;
5) analyze the brightness or gray-level distribution of the grayscale images;
6) according to the analysis, adjust the brightness distribution of the images so that the gray levels spread fairly evenly over 0-255.
The method of extracting the feature points is:
1) extract feature points from the preprocessed images with the SURF or (A)KAZE algorithm;
2) the value returned by SURF/(A)KAZE feature extraction is a feature-point vector whose physical meaning is the distribution of gradient directions and magnitudes; the direction corresponding to the second-largest gradient weight is defined as the auxiliary direction;
3) sort the feature points in ascending order of the auxiliary directions of their vectors; the directions are discretized into 8 values, each covering a 45° sector, the angle being the difference between the auxiliary direction and the principal direction; the feature-point set is then divided into 8 classes in ascending order of auxiliary direction;
4) build a hash-table-like data structure in which each chain stores the feature points sharing the same auxiliary direction.
Step B: match the above feature points and screen out the successfully paired point sets, obtaining the multi-view geometric correspondences of the feature points.
The method of matching the feature points is:
1) fix the matching precision d_max: the Euclidean distance between mutually matched feature points must be smaller than d_max;
2) take a feature point as the point to be matched;
3) read the auxiliary direction of its feature vector and compute which chain the target point belongs to;
4) compute the Euclidean distances to all feature points in that chain, and choose as the match the feature point whose distance to the point to be matched is the smallest and below d_max;
5) if several candidate matches lie at nearly the same distance from the point to be matched, abandon the current feature point and proceed directly to the next one.
Step C: compute the projective reconstruction of the scene from the feature-point correspondences, obtaining the relative positions and attitudes of the cameras in three-dimensional space.
The method of establishing the projective reconstruction is:
1) let X_i denote a space point, P_j a camera, and x_i^j the image of the space point X_i in camera P_j, that is: x_i^j = P_j X_i; there then exists a mapping that describes how the same space point X_i is imaged in cameras P_j and P_k (the epipolar constraint (x_i^k)^T F_jk x_i^j = 0, where F_jk is the fundamental matrix between the two views);
2) compute the projective reconstruction of the scene by minimizing a cost function with random sample consensus (RANSAC), i.e. solve for the P_j, and compute the X_i by triangulation.
Step D: perform metric reconstruction and global optimization.
The method of performing the metric reconstruction is:
1) the metric reconstruction requires the intrinsic matrix of the camera; if the input data contain it, the metric reconstruction can be carried out directly; otherwise, obtain the camera intrinsics by solving the self-calibration equations through the correspondence between the dual of the absolute conic and the dual image of the absolute quadric;
2) bundle adjustment: reconstructing each view introduces error, and repeated reconstruction accumulates it, so a global optimization is needed to reduce the accumulated error. Denote by X̂_i the estimated space points and by P̂_j the estimated camera positions and attitudes in space, and choose the reprojection error as the cost function, that is:
min_{P̂_j, X̂_i} Σ_{i,j} d(P̂_j X̂_i, x_i^j)²
Taking the above reconstruction result as the initial value, iterate to find the optimum that minimizes the cost function.
Step E: according to the relative camera positions in three-dimensional space, choose a suitable viewing angle and observation plane, and establish the projection of all images onto that plane.
The concrete method is:
1) choose the image stitching plane as the viewing plane;
2) using the relative camera poses established by the above metric reconstruction, reproject the images to be stitched onto the stitching plane by a view transformation; the influence of the scale factor should be taken into account in this process, projecting all images to the same scale, i.e. keeping the same target a similar size across the different images.
Step F: image stitching: remap all images onto the target plane, stitch the reprojected images, and obtain the two-dimensional panoramic image.
The concrete method is:
1) stitch the view-transformed images into one large image;
2) if reprojected pixels in an overlap region do not coincide, determine the pixel positions with a random sample consensus procedure: compute the center of the multiple candidate pixels and discard the points that deviate too far from that center;
3) for feature points lost in an image, determine their values by interpolation;
4) adjust local brightness and contrast so that the stitched image looks more natural.
Step G: three-dimensional surface reconstruction: restore the image scene in three-dimensional space from the relative camera positions and attitudes computed in step C.
The concrete method is:
1) according to the metric reconstruction computed in step D, reproject the feature points of the images into three-dimensional space;
2) eliminate the "one point projected to several locations" phenomenon caused by model error and noise: for the several positions reconstructed from the same feature point, take their weighted mean as the position of that feature point;
3) jointly build, from the feature-point sets, the triangular mesh of the images and the triangulation in three-dimensional space, and establish the correspondence between the meshes;
4) fill the holes in the triangular mesh of the three-dimensional surface from the images by image fusion.
Claims (9)
1. A combination method of two-dimensional image stitching and three-dimensional surface reconstruction, characterized in that the method comprises the following steps, carried out in order:
Step A: preprocess the input images so that their feature points become more plentiful, then extract the feature points;
Step B: match the above feature points and screen out the successfully paired point sets, obtaining the multi-view geometric correspondences of the feature points;
Step C: compute the projective reconstruction of the scene from the feature-point correspondences, obtaining the relative positions and attitudes of the cameras in three-dimensional space;
Step D: perform metric reconstruction and global optimization;
Step E: according to the relative camera positions in three-dimensional space, choose a suitable viewing angle and observation plane, and establish the projection of all images onto that plane;
Step F: image stitching: remap all images onto the target plane, stitch the reprojected images, and obtain the two-dimensional panoramic image;
Step G: three-dimensional surface reconstruction: restore the image scene in three-dimensional space from the relative camera positions and attitudes computed in step C.
2. The combination method of two-dimensional image stitching and three-dimensional surface reconstruction according to claim 1, characterized in that in step A, the method of preprocessing the images is:
1) read the images frame by frame;
2) if the camera parameters are known, correct the image distortion; otherwise go directly to step 3);
3) denoise the images by Gaussian filtering;
4) convert color images to grayscale;
5) analyze the brightness or gray-level distribution of the grayscale images;
6) according to the analysis, adjust the brightness distribution of the images so that the gray levels spread fairly evenly over 0-255.
3. The combination method of two-dimensional image stitching and three-dimensional surface reconstruction according to claim 1, characterized in that in step A, the method of extracting the feature points is:
1) extract feature points from the preprocessed images with the SURF or (A)KAZE algorithm;
2) the value returned by SURF/(A)KAZE feature extraction is a feature-point vector whose physical meaning is the distribution of gradient directions and magnitudes; the direction corresponding to the second-largest gradient weight is defined as the auxiliary direction;
3) sort the feature points in ascending order of the auxiliary directions of their vectors; the directions are discretized into 8 values, each covering a 45° sector, the angle being the difference between the auxiliary direction and the principal direction; the feature-point set is then divided into 8 classes in ascending order of auxiliary direction;
4) build a hash-table-like data structure in which each chain stores the feature points sharing the same auxiliary direction.
4. The combination method of two-dimensional image stitching and three-dimensional surface reconstruction according to claim 1, characterized in that in step B, the method of matching the feature points is:
1) fix the matching precision d_max: the Euclidean distance between mutually matched feature points must be smaller than d_max;
2) take a feature point as the point to be matched;
3) read the auxiliary direction of its feature vector and compute which chain the target point belongs to;
4) compute the Euclidean distances to all feature points in that chain, and choose as the match the feature point whose distance to the point to be matched is the smallest and below d_max;
5) if several candidate matches lie at nearly the same distance from the point to be matched, abandon the current feature point and proceed directly to the next one.
5. The combination method of two-dimensional image stitching and three-dimensional surface reconstruction according to claim 1, characterized in that in step C, the method of establishing the projective reconstruction is:
1) let X_i denote a space point, P_j a camera, and x_i^j the image of the space point X_i in camera P_j, that is: x_i^j = P_j X_i; there then exists a mapping that describes how the same space point X_i is imaged in cameras P_j and P_k (the epipolar constraint (x_i^k)^T F_jk x_i^j = 0, where F_jk is the fundamental matrix between the two views);
2) compute the projective reconstruction of the scene by minimizing a cost function with random sample consensus (RANSAC), i.e. solve for the P_j, and compute the X_i by triangulation.
6. The combination method of two-dimensional image stitching and three-dimensional surface reconstruction according to claim 1, characterized in that in step D, the method of performing the metric reconstruction is:
1) if the input data contain the intrinsic matrix of the camera, the metric reconstruction can be carried out directly; otherwise, obtain the camera intrinsics by solving the self-calibration equations through the correspondence between the dual of the absolute conic and the dual image of the absolute quadric;
2) bundle adjustment: denote by X̂_i the estimated space points and by P̂_j the estimated camera positions and attitudes in space, and choose the reprojection error as the cost function, that is:
min_{P̂_j, X̂_i} Σ_{i,j} d(P̂_j X̂_i, x_i^j)²
Taking the above reconstruction result as the initial value, iterate to find the optimum that minimizes the cost function.
7. The combination method of two-dimensional image stitching and three-dimensional surface reconstruction according to claim 1, characterized in that in step E, the concrete method of choosing a suitable viewing angle and observation plane according to the relative camera positions in three-dimensional space, and of establishing the projection of all images onto that plane, is:
1) choose the image stitching plane as the viewing plane;
2) using the relative camera poses established by the above metric reconstruction, reproject the images to be stitched onto the stitching plane by a view transformation, projecting all images to the same scale, i.e. keeping the same target a similar size across the different images.
8. The combination method of two-dimensional image stitching and three-dimensional surface reconstruction according to claim 1, characterized in that in step F, the concrete method of image stitching is:
1) stitch the view-transformed images into one large image;
2) if reprojected pixels in an overlap region do not coincide, determine the pixel positions with a random sample consensus procedure: compute the center of the multiple candidate pixels and discard the points that deviate too far from that center;
3) for feature points lost in an image, determine their values by interpolation;
4) adjust local brightness and contrast so that the stitched image looks more natural.
9. The combination method of two-dimensional image stitching and three-dimensional surface reconstruction according to claim 1, characterized in that in step G, the concrete method of three-dimensional surface reconstruction is:
1) according to the metric reconstruction computed in step D, reproject the feature points of the images into three-dimensional space;
2) eliminate the "one point projected to several locations" phenomenon caused by model error and noise: for the several positions reconstructed from the same feature point, take their weighted mean as the position of that feature point;
3) jointly build, from the feature-point sets, the triangular mesh of the images and the triangulation in three-dimensional space, and establish the correspondence between the meshes;
4) fill the holes in the triangular mesh of the three-dimensional surface from the images by image fusion.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510752244.XA CN105303615A (en) | 2015-11-06 | 2015-11-06 | Combination method of two-dimensional stitching and three-dimensional surface reconstruction of image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105303615A true CN105303615A (en) | 2016-02-03 |
Family
ID=55200832
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510752244.XA Pending CN105303615A (en) | 2015-11-06 | 2015-11-06 | Combination method of two-dimensional stitching and three-dimensional surface reconstruction of image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105303615A (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446910A (en) * | 2016-09-12 | 2017-02-22 | 电子科技大学 | Complex geological curved surface feature extraction and reconstruction method |
CN106506464A (en) * | 2016-10-17 | 2017-03-15 | 武汉秀宝软件有限公司 | Toy interaction method and system based on augmented reality
CN107644394A (en) * | 2016-07-21 | 2018-01-30 | 完美幻境(北京)科技有限公司 | 3D image processing method and device
CN107992190A (en) * | 2017-10-19 | 2018-05-04 | 中国船舶工业系统工程研究院 | Real-time Simulation operating system and control method based on projection fusion and image recognition |
CN108010079A (en) * | 2017-10-19 | 2018-05-08 | 中国船舶工业系统工程研究院 | Status information remote monitoring system and method based on projection fusion and image recognition |
CN108008813A (en) * | 2017-10-19 | 2018-05-08 | 中国船舶工业系统工程研究院 | Interactive system and control method based on projection fusion and image recognition
CN108961384A (en) * | 2017-05-19 | 2018-12-07 | 中国科学院苏州纳米技术与纳米仿生研究所 | Three-dimensional image reconstruction method
CN109003250A (en) * | 2017-12-20 | 2018-12-14 | 罗普特(厦门)科技集团有限公司 | Image and three-dimensional model fusion method
WO2019085760A1 (en) * | 2017-11-01 | 2019-05-09 | 欧阳聪星 | Image processing method and apparatus |
CN110211025A (en) * | 2019-04-25 | 2019-09-06 | 北京理工大学 | Bundle adjustment method for image stitching, storage medium and computing device
CN110443838A (en) * | 2019-07-09 | 2019-11-12 | 中山大学 | Associative graph construction method for stereoscopic image stitching
CN110458896A (en) * | 2019-08-07 | 2019-11-15 | 成都索贝数码科技股份有限公司 | Camera intrinsic parameter solving method and system based on the absolute quadric
WO2019242385A1 (en) * | 2018-06-19 | 2019-12-26 | 周超强 | Smart controllable hair dryer |
CN111383337A (en) * | 2020-03-20 | 2020-07-07 | 北京百度网讯科技有限公司 | Method and device for identifying objects |
WO2020207512A1 (en) * | 2019-04-12 | 2020-10-15 | 北京城市网邻信息技术有限公司 | Three-dimensional object modeling method, image processing method, and image processing device |
CN111833447A (en) * | 2020-07-13 | 2020-10-27 | Oppo广东移动通信有限公司 | Three-dimensional map construction method, three-dimensional map construction device and terminal equipment |
CN113483771A (en) * | 2021-06-30 | 2021-10-08 | 北京百度网讯科技有限公司 | Method, device and system for generating live-action map |
CN115294748A (en) * | 2022-09-08 | 2022-11-04 | 广东中科凯泽信息科技有限公司 | Fixed target disappearance early warning method based on visual data analysis |
CN117315152A (en) * | 2023-09-27 | 2023-12-29 | 杭州一隅千象科技有限公司 | Binocular stereoscopic imaging method and binocular stereoscopic imaging system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101866482A (en) * | 2010-06-21 | 2010-10-20 | 清华大学 | Panorama stitching method and device based on camera self-calibration
CN103456038A (en) * | 2013-08-19 | 2013-12-18 | 华中科技大学 | Method for reconstructing a three-dimensional scene of an underground environment
CN103646424A (en) * | 2013-11-26 | 2014-03-19 | 北京空间机电研究所 | Construction method for an aerial seamless virtual roaming system
CN103971353A (en) * | 2014-05-14 | 2014-08-06 | 大连理工大学 | Laser-assisted stitching method for measurement image data of large forgings
US20140285486A1 (en) * | 2013-03-20 | 2014-09-25 | Siemens Product Lifecycle Management Software Inc. | Image-based 3d panorama
2015-11-06: CN application CN201510752244.XA filed in China; published as CN105303615A (en); legal status: active, Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101866482A (en) * | 2010-06-21 | 2010-10-20 | 清华大学 | Panorama stitching method and device based on camera self-calibration
US20140285486A1 (en) * | 2013-03-20 | 2014-09-25 | Siemens Product Lifecycle Management Software Inc. | Image-based 3d panorama
CN103456038A (en) * | 2013-08-19 | 2013-12-18 | 华中科技大学 | Method for reconstructing a three-dimensional scene of an underground environment
CN103646424A (en) * | 2013-11-26 | 2014-03-19 | 北京空间机电研究所 | Construction method for an aerial seamless virtual roaming system
CN103971353A (en) * | 2014-05-14 | 2014-08-06 | 大连理工大学 | Laser-assisted stitching method for measurement image data of large forgings
Non-Patent Citations (2)
Title |
---|
张鸿燕: "Three-dimensional modeling method and system based on video images: research on three-dimensional modeling methods and applications for the digestive tract", PhD dissertation, Graduate University of the Chinese Academy of Sciences *
高大龙 et al.: "Panorama stitching algorithm based on forward-motion video of trains", Journal of Shandong University (Engineering Science) *
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107644394A (en) * | 2016-07-21 | 2018-01-30 | 完美幻境(北京)科技有限公司 | 3D image processing method and device
CN107644394B (en) * | 2016-07-21 | 2021-03-30 | 完美幻境(北京)科技有限公司 | 3D image processing method and device |
CN106446910B (en) * | 2016-09-12 | 2020-10-27 | 电子科技大学 | Complex geological curved surface feature extraction and reconstruction method |
CN106446910A (en) * | 2016-09-12 | 2017-02-22 | 电子科技大学 | Complex geological curved surface feature extraction and reconstruction method |
CN106506464B (en) * | 2016-10-17 | 2019-11-12 | 武汉秀宝软件有限公司 | Toy interaction method and system based on augmented reality
CN106506464A (en) * | 2016-10-17 | 2017-03-15 | 武汉秀宝软件有限公司 | Toy interaction method and system based on augmented reality
CN108961384A (en) * | 2017-05-19 | 2018-12-07 | 中国科学院苏州纳米技术与纳米仿生研究所 | Three-dimensional image reconstruction method
CN108961384B (en) * | 2017-05-19 | 2021-11-30 | 中国科学院苏州纳米技术与纳米仿生研究所 | Three-dimensional image reconstruction method |
CN108010079A (en) * | 2017-10-19 | 2018-05-08 | 中国船舶工业系统工程研究院 | Status information remote monitoring system and method based on projection fusion and image recognition |
CN108010079B (en) * | 2017-10-19 | 2021-11-02 | 中国船舶工业系统工程研究院 | State information remote monitoring system and method based on projection fusion and image recognition |
CN107992190A (en) * | 2017-10-19 | 2018-05-04 | 中国船舶工业系统工程研究院 | Real-time Simulation operating system and control method based on projection fusion and image recognition |
CN108008813A (en) * | 2017-10-19 | 2018-05-08 | 中国船舶工业系统工程研究院 | Interactive system and control method based on projection fusion and image recognition
WO2019085760A1 (en) * | 2017-11-01 | 2019-05-09 | 欧阳聪星 | Image processing method and apparatus |
US11107188B2 (en) | 2017-11-01 | 2021-08-31 | Beijing Keeyoo Technologies Co., Ltd | Image processing method and device |
CN109003250B (en) * | 2017-12-20 | 2023-05-30 | 罗普特科技集团股份有限公司 | Fusion method of image and three-dimensional model |
CN109003250A (en) * | 2017-12-20 | 2018-12-14 | 罗普特(厦门)科技集团有限公司 | Image and three-dimensional model fusion method
WO2019242385A1 (en) * | 2018-06-19 | 2019-12-26 | 周超强 | Smart controllable hair dryer |
WO2020207512A1 (en) * | 2019-04-12 | 2020-10-15 | 北京城市网邻信息技术有限公司 | Three-dimensional object modeling method, image processing method, and image processing device |
US11869148B2 (en) | 2019-04-12 | 2024-01-09 | Beijing Chengshi Wanglin Information Technology Co., Ltd. | Three-dimensional object modeling method, image processing method, image processing device |
CN110211025A (en) * | 2019-04-25 | 2019-09-06 | 北京理工大学 | Bundle adjustment method for image stitching, storage medium and computing device
CN110443838A (en) * | 2019-07-09 | 2019-11-12 | 中山大学 | Associative graph construction method for stereoscopic image stitching
CN110443838B (en) * | 2019-07-09 | 2023-05-05 | 中山大学 | Associative graph construction method for stereoscopic image stitching |
CN110458896A (en) * | 2019-08-07 | 2019-11-15 | 成都索贝数码科技股份有限公司 | Camera intrinsic parameter solving method and system based on the absolute quadric
CN110458896B (en) * | 2019-08-07 | 2021-09-21 | 成都索贝数码科技股份有限公司 | Camera intrinsic parameter solving method and system based on the absolute quadric
CN111383337A (en) * | 2020-03-20 | 2020-07-07 | 北京百度网讯科技有限公司 | Method and device for identifying objects |
CN111383337B (en) * | 2020-03-20 | 2023-06-27 | 北京百度网讯科技有限公司 | Method and device for identifying objects |
CN111833447A (en) * | 2020-07-13 | 2020-10-27 | Oppo广东移动通信有限公司 | Three-dimensional map construction method, three-dimensional map construction device and terminal equipment |
CN113483771A (en) * | 2021-06-30 | 2021-10-08 | 北京百度网讯科技有限公司 | Method, device and system for generating live-action map |
CN113483771B (en) * | 2021-06-30 | 2024-01-30 | 北京百度网讯科技有限公司 | Method, device and system for generating live-action map |
CN115294748A (en) * | 2022-09-08 | 2022-11-04 | 广东中科凯泽信息科技有限公司 | Fixed target disappearance early warning method based on visual data analysis |
CN117315152A (en) * | 2023-09-27 | 2023-12-29 | 杭州一隅千象科技有限公司 | Binocular stereoscopic imaging method and binocular stereoscopic imaging system |
CN117315152B (en) * | 2023-09-27 | 2024-03-29 | 杭州一隅千象科技有限公司 | Binocular stereoscopic imaging method and binocular stereoscopic imaging system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105303615A (en) | Combination method of two-dimensional stitching and three-dimensional surface reconstruction of image | |
Schops et al. | A multi-view stereo benchmark with high-resolution images and multi-camera videos | |
CN111815757B (en) | Large member three-dimensional reconstruction method based on image sequence | |
Schneider et al. | RegNet: Multimodal sensor registration using deep neural networks | |
CN110211043B (en) | Registration method based on grid optimization for panoramic image stitching | |
CN105957007B (en) | Image stitching method based on feature point plane similarity | |
CN107665479A (en) | Feature extraction method, panorama stitching method, and device, equipment and computer-readable storage medium therefor | |
WO2015139574A1 (en) | Static object reconstruction method and system | |
CN101394573B (en) | Panoramagram generation method and system based on characteristic matching | |
CN107578376B (en) | Image stitching method based on feature point clustering four-way division and local transformation matrices | |
US20130051685A1 (en) | Patch-based synthesis techniques | |
CN105069746A (en) | Real-time video face substitution method and system based on local affine and color transfer techniques | |
CN108734657B (en) | Image stitching method with parallax handling capability | |
CN107679537A (en) | Texture-free space target pose estimation algorithm based on contour point ORB feature matching | |
CN106919944A (en) | Fast wide-angle image recognition method based on the ORB algorithm | |
JP2007000205A (en) | Image processing apparatus, image processing method, and image processing program | |
CN104463899A (en) | Target object detecting and monitoring method and device | |
CN102903101B (en) | Method for carrying out water-surface data acquisition and reconstruction by using multiple cameras | |
CN102982524B (en) | Stitching method for corn ear image sequences | |
CN109711242A (en) | Modification method, device and the storage medium of lane line | |
CN103700082B (en) | Image stitching method based on dual quaternion relative orientation | |
CN103733225B (en) | Feature point matching system, feature point matching method, and recording medium | |
CN103793891A (en) | Low-complexity panoramic image stitching method | |
Camposeco et al. | Non-parametric structure-based calibration of radially symmetric cameras | |
JP6558803B2 (en) | Geometric verification apparatus and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 2016-02-03