CN108257089A - A large-field-of-view video panorama stitching method based on the iterative closest point - Google Patents

A large-field-of-view video panorama stitching method based on the iterative closest point

Info

Publication number
CN108257089A
Authority
CN
China
Prior art keywords
view
frame
point
views
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810028354.5A
Other languages
Chinese (zh)
Other versions
CN108257089B (en)
Inventor
袁丁 (Yuan Ding)
刘韬 (Liu Tao)
张弘 (Zhang Hong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201810028354.5A priority Critical patent/CN108257089B/en
Publication of CN108257089A publication Critical patent/CN108257089A/en
Application granted granted Critical
Publication of CN108257089B publication Critical patent/CN108257089B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general, involving 3D image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a large-field-of-view video panorama stitching method based on the iterative closest point, which uses the three-dimensional information of the scene to transform every frame view onto the same image plane and thereby stitch the scene. Specifically, for every pair of adjacent frame views, the following operations are performed: 1. extract and match the features of the two adjacent frame views; 2. compute the relative pose of the two adjacent frame views; 3. under the epipolar constraint, densely match the two adjacent frame views; 4. from the dense matching result, compute the three-dimensional model of the overlap region of the two adjacent frame views; 5. using the iterative closest point method, transform the three-dimensional model obtained in step 4 into the camera coordinate system of the 0th frame view; 6. project the transformed three-dimensional model onto the image plane of the 0th frame view, thereby establishing the mapping between each point on the three-dimensional model and a position on the 0th frame view; 7. fuse the points mapped to the same position of the 0th frame view to complete the stitching. Compared with traditional homography-based stitching methods, the present invention is more true to the scene and more reliable.

Description

Large-field-of-view video panorama stitching method based on the iterative closest point
Technical Field
The invention relates to a method for stitching a large-field-of-view video panorama based on the iterative closest point, and belongs to the field of computer vision.
Background
Video panorama stitching seamlessly stitches all frame views of a video that share overlapping regions into a single panorama reflecting the overall appearance of the video scene. Video panorama stitching technology targets video obtained by ordinary imaging equipment. With the popularization of digital cameras and smartphones, the cost of acquiring panoramic images has dropped greatly, and high-quality video panorama stitching has generated huge market demand. In addition, video panorama stitching is widely applied in fields such as virtual reality and augmented reality; it has a very broad development prospect and a high research value.
Conventional video panorama stitching techniques are based on the planar homography assumption. Specifically, robust features are extracted from and matched between two adjacent frame views; a homography mapping matrix is estimated from the matched features; the pixel points of one view are then mapped into the other view through the estimated homography matrix; and the gray values are fused (for a color image, the three RGB channels are fused separately, likewise below) to obtain the stitching result. The traditional method thus assumes, while stitching two views, that the entire scene lies on a single plane. In reality, this assumption obviously does not hold. When the distance to the camera is far greater than the variation of scene depth, the depth variation can be ignored, the scene can be approximated as lying on one plane, and the traditional video panorama stitching method works well. When the depth variation of the scene cannot be ignored, however, the traditional method produces large distortions in the stitching result, which limits its practical application.
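For concreteness, the traditional pipeline described above can be sketched as follows; this is a minimal illustration in Python with OpenCV, assuming the matched feature point pairs are already available, and choices such as the RANSAC threshold and the doubled canvas width are assumptions of the sketch, not part of this document:

```python
import cv2
import numpy as np

def homography_stitch(img_a, img_b, pts_a, pts_b):
    # Estimate the homography that maps img_b's pixels onto img_a's plane;
    # RANSAC rejects mismatched point pairs.
    H, _ = cv2.findHomography(np.float32(pts_b), np.float32(pts_a),
                              cv2.RANSAC, 5.0)
    h, w = img_a.shape[:2]
    # Warping every pixel with a single matrix is the "single plane"
    # assumption that breaks down when scene depth varies.
    canvas = cv2.warpPerspective(img_b, H, (2 * w, h))
    canvas[:h, :w] = img_a  # naive overwrite in place of gray-value fusion
    return canvas
```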
Disclosure of Invention
The invention solves the following problem: it overcomes the defects of the prior art and provides a large-field-of-view video panorama stitching method based on the iterative closest point, which accurately maps the pixels of different views onto the same image plane using the depth information of the scene, fundamentally solving the panorama distortion caused by the planar homography assumption.
The technical scheme of the invention is as follows: a large-field-of-view video panorama stitching method based on the iterative closest point comprises the following steps:
(1) extracting and matching the features of two adjacent frame views;
(2) calculating the relative pose of the two adjacent frame views from the features obtained in step (1);
(3) deriving the epipolar constraint from the relative pose, and performing dense matching of the two adjacent frame views under the epipolar constraint to obtain dense matching point pairs;
(4) calculating a three-dimensional model of the overlap region of the two adjacent frame views using the dense matching point pairs obtained in step (3);
(5) converting the three-dimensional model obtained in step (4) into the camera coordinate system of the 0th frame view, i.e. the world coordinate system, by the iterative closest point method;
(6) projecting the three-dimensional model converted in step (5) onto the 0th frame view, and establishing the mapping between each point on the three-dimensional model and a position on the 0th frame view;
(7) fusing the points mapped to the same position of the 0th frame view on the basis of the mapping obtained in step (6), thereby completing the stitching.
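For orientation, the seven steps can be tied together as in the following minimal sketch (Python with NumPy is assumed; all helper names, match_sift, relative_pose, triangulate, register_to_world and fuse, are illustrative and are sketched under the corresponding steps of the detailed description below, and sparse SIFT matches stand in for the dense point pairs of step (3) to keep the sketch short):

```python
import numpy as np

def stitch_panorama(frames, K):
    """frames: list of video frame images; K: 3x3 camera intrinsic matrix."""
    H, W = frames[0].shape[:2]
    all_pts, all_cols, M_0 = [], [], None
    for k in range(len(frames) - 1):                          # adjacent pairs
        pts_k, pts_k1 = match_sift(frames[k], frames[k + 1])  # step (1)
        F, R, t = relative_pose(pts_k, pts_k1, K)             # step (2)
        M_k = triangulate(pts_k, pts_k1, K, R, t)             # steps (3)-(4)
        cols = [frames[k][int(y), int(x)] for x, y in pts_k]  # point colors
        if k == 0:
            M_0 = M_k
            M_k_bar = M_k     # frame 0's camera frame is the world frame
        else:
            _, _, M_k_bar = register_to_world(M_k, M_0)       # step (5)
        all_pts.extend(M_k_bar)
        all_cols.extend(cols)
    return fuse(all_pts, all_cols, K, H, W)                   # steps (6)-(7)
```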
The extraction and matching of the features of the two adjacent frame views in step (1) are realized as follows:
(1) SIFT feature points are extracted from the two adjacent frame views, and a descriptor is computed for each feature point;
(2) the features extracted from the two adjacent frame views are matched on the basis of their descriptors, yielding several matched feature point pairs between the two views.
Step (4) calculates the three-dimensional model of the overlap region of the two adjacent frame views as follows:
according to the photographic ranging principle, the three-dimensional points corresponding to the matching point pairs between the two frame views obtained in step (3) are computed; together, these three-dimensional points form the three-dimensional model of the overlap region of the two adjacent frame views, denoted $M_k = \{X_i^k\}_{i=0}^{h-1}$, where $X_i^k$ represents a point of $M_k$, $h$ is the number of points in $M_k$, and the superscript $k$ indicates that $M_k$ comes from the dense matching point pairs of the $k$-th and $(k+1)$-th frame views; the color of each point of $M_k$ is the mean of the colors of the matching points from which the three-dimensional point was computed.
Step (5) converts the three-dimensional model obtained in step (4) into the camera coordinate system of the 0th frame view by the iterative closest point method, as follows:
(11) let $M_0$ be the three-dimensional model corresponding to the overlap region of the 0th and 1st frame views; the optimal rigid transformation from $M_k$ to $M_0$, described by a rotation matrix $R_k$ and a translation vector $T_k$, is computed by the iterative closest point method;
(12) the optimal rigid transformation is applied to the points of $M_k$, as in formula (1):
$$\bar{X}_i^k = R_k X_i^k + T_k \tag{1}$$
where $\bar{X}_i^k$ is the point $X_i^k$ after transformation into the world coordinate system and $i$ indexes $M_k$ (and $\bar{M}_k$, defined below); $\bar{M}_k = \{\bar{X}_i^k\}_{i=0}^{h-1}$ is the result of transforming $M_k$ into the camera coordinate system of the 0th frame view.
Step (6) establishes the mapping between each point on the three-dimensional model and a position on the 0th frame view, realized as follows:
(21) each point of $\bar{M}_k$ is projected onto the 0th frame view, as in formula (2):
$$[u, v]^T = \pi(\bar{X}_i^k) \tag{2}$$
in formula (2), $\pi(\cdot)$ is the perspective projection mapping and $[u, v]^T$ are the coordinates of the projection of $\bar{X}_i^k$ in the 0th frame view, $u$ and $v$ being the abscissa and ordinate of the projection;
(22) combining formula (2) with formula (1) establishes the mapping between each point of $M_k$ and a position on the 0th frame view, as in formula (3):
$$[u, v]^T = \pi(R_k X_i^k + T_k) \tag{3}$$
in step (7), the points at the same position of the view of the 0 th frame are fused, and the method is specifically realized as follows:
for all positions p of the 0 th frame view, the following is performed:
(31) finding out all points which establish corresponding relation with the position p through the step (6), and calculating the average value of the colors of the points which establish corresponding relation with the position p and recording the average value as C;
(32) the color at position p is assigned C.
Compared with the prior art, the invention has the following advantages: the invention takes the three-dimensional information of the scene into account during panorama stitching. It computes a three-dimensional model of the overlap region of each pair of adjacent frame views, converts the model into the camera coordinate system of the 0th frame view, and then establishes an accurate mapping from each point of the model to a position in the 0th frame view, thereby obtaining a more real and natural panorama. The traditional method assumes that the whole scene lies on one plane and establishes the mapping through a homography matrix, ignoring the three-dimensional information of the scene and thus obtaining an inaccurate mapping. When the depth of field varies strongly, the traditional method produces many stitching flaws and a panorama of poor quality.
Drawings
FIG. 1 shows the flow of the large-field-of-view video panorama stitching method based on the iterative closest point;
FIG. 2 shows the result of a panorama stitching experiment performed on a video with the present invention: (a) several frames taken from the video used in the experiment; (b) the stitching result of the invention.
Detailed description of the preferred embodiments
The present invention will be described in detail below with reference to the accompanying drawings and examples. For convenience of description, the frame views of the video are indexed by the symbol k; the k-th frame view and the (k+1)-th frame view are adjacent views.
As shown in FIG. 1, the present invention is embodied as follows:
1. Extracting and matching the features of two adjacent frame views
Image features are pixels in a digital image that have a particular type of property. Each image feature is usually associated with a descriptor (feature vector) whose role is to describe the feature. Common image features are FAST, HOG, SURF, SIFT, etc. Since the solution of the relative pose places high demands on the robustness of the features, SIFT features are chosen here.
Feature matching is based on the feature descriptors. Specifically, let $\{x_0, x_1, \ldots, x_{n-1}\}$ and $\{x'_0, x'_1, \ldots, x'_{m-1}\}$ be the features extracted from the $k$-th and $(k+1)$-th frame views, where $n$ and $m$ are the numbers of features in the two views, and let $D(\cdot)$ be the descriptor operator, so that $D(x_l)$ and $D(x'_j)$ are the descriptors of $x_l$ and $x'_j$. If feature $x_l$ ($0 \le l \le n-1$) in the $k$-th frame view and feature $x'_{l'}$ in the $(k+1)$-th frame view are a matching pair, they must satisfy the condition of formula (4):
$$\|D(x_l) - D(x'_{l'})\| = \min_{0 \le j \le m-1} \|D(x_l) - D(x'_j)\| \tag{4}$$
In formula (4), $\|\cdot\|$ denotes the Euclidean distance operator and $\min(\cdot)$ the minimum operator. Suppose that after matching, $s$ groups of matching features are obtained, uniformly denoted $(x_0, x'_0), (x_1, x'_1), \ldots, (x_{s-1}, x'_{s-1})$.
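A minimal sketch of this step in Python with OpenCV (an assumed toolchain; cv2.SIFT_create needs OpenCV 4.4 or later). BFMatcher with NORM_L2 implements the nearest-descriptor criterion of formula (4); crossCheck=True adds a symmetry filter that the text does not require:

```python
import cv2

def match_sift(view_k, view_k1):
    sift = cv2.SIFT_create()
    kp_k, des_k = sift.detectAndCompute(view_k, None)      # features x_l
    kp_k1, des_k1 = sift.detectAndCompute(view_k1, None)   # features x'_j
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des_k, des_k1)  # nearest descriptor, formula (4)
    pts_k = [kp_k[m.queryIdx].pt for m in matches]
    pts_k1 = [kp_k1[m.trainIdx].pt for m in matches]
    return pts_k, pts_k1    # the s matched pairs (x_t, x'_t)
```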
2. Calculating the relative pose of two adjacent frame views
Let $F$ be the fundamental matrix of the $k$-th frame view relative to the $(k+1)$-th frame view. The matching features obtained in step 1 should satisfy the epipolar constraint equation $x'^T_t F x_t = 0$, where $x'_t$ and $x_t$ are homogeneous coordinates and $t = 0, 1, \ldots, s-1$ indexes the matching features obtained in step 1. When $s \ge 8$, $F$ can be estimated by the singular value decomposition method. With the camera intrinsic matrix $K$ and the estimated $F$, the essential matrix $E = K^T F K$ is computed; decomposing $E$ by singular value decomposition yields the pose of the $(k+1)$-th frame view relative to the $k$-th frame view, described by a rotation matrix and a translation vector.
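Under the assumption of a known intrinsic matrix K, this step can be sketched with OpenCV as follows; recoverPose internally performs the singular value decomposition of E and keeps the physically valid (R, t):

```python
import numpy as np
import cv2

def relative_pose(pts_k, pts_k1, K):
    pts_k = np.float64(pts_k)
    pts_k1 = np.float64(pts_k1)
    # Eight-point estimate of the fundamental matrix (requires s >= 8).
    F, _ = cv2.findFundamentalMat(pts_k, pts_k1, cv2.FM_8POINT)
    E = K.T @ F @ K               # essential matrix from F and the intrinsics
    # Decompose E and keep the (R, t) that puts the points in front of both
    # cameras: the pose of the (k+1)-th view relative to the k-th view.
    _, R, t, _ = cv2.recoverPose(E, pts_k, pts_k1, K)
    return F, R, t
```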
3. Dense matching of two adjacent frame views under the epipolar constraint
The aim of dense matching is to match, as far as possible, the pixel points of the $k$-th frame view to corresponding pixel points of the $(k+1)$-th frame view under the epipolar constraint. The matching is based on features of the pixel points. Let $p_i^k = [u_i^k, v_i^k, 1]^T$ and $p_j^{k+1} = [u_j^{k+1}, v_j^{k+1}, 1]^T$ denote homogeneous coordinates of pixel points of the $k$-th and $(k+1)$-th frame views, where $u_i^k$ and $v_i^k$ are the abscissa and ordinate of $p_i^k$ on the $k$-th frame view, $u_j^{k+1}$ and $v_j^{k+1}$ are the abscissa and ordinate of $p_j^{k+1}$ on the $(k+1)$-th frame view, and $i$ and $j$ index the pixel points of the two views. Let $V(\cdot)$ denote the feature operator of a pixel point. For $p_i^k$, among the pixel points of the $(k+1)$-th frame view that satisfy the epipolar constraint, the pixel point $p_j^{k+1}$ with the nearest feature is sought as its matching point, as in formula (5):
$$p_j^{k+1} = \arg\min_{p} \|V(p) - V(p_i^k)\| \quad \text{s.t.} \quad \frac{\left|p^T F\, p_i^k\right|}{\sqrt{(F p_i^k)_1^2 + (F p_i^k)_2^2}} < \varepsilon \tag{5}$$
In formula (5), $\arg\min$ denotes the minimizing-argument operator. The second condition of formula (5) is the epipolar constraint; its geometric meaning is that the distance from $p$ to the epipolar line $F p_i^k$ is less than $\varepsilon$. $F$ denotes the fundamental matrix, recomputed from the pose obtained in step 2.
4. Calculating the three-dimensional model of the overlap region of two adjacent frame views from the pixel points obtained by the dense matching of step 3;
Through a pixel point $p_i^k$ of the $k$-th frame view, a ray starting from the optical center of the $k$-th frame view and pointing through $p_i^k$ is obtained; similarly, from the pose obtained in step 2, a ray starting from the optical center of the $(k+1)$-th frame view and pointing through the matching point of $p_i^k$ is obtained. By solving for the intersection of these two rays, the three-dimensional position $X_i^k$ corresponding to $p_i^k$ is obtained; this way of computing $X_i^k$ is called the photographic ranging principle. The color of $X_i^k$ is recorded as $C(X_i^k)$ (it will be used in step 7) and is defined as the average of the gray of $p_i^k$ and the gray of the point it matches (if a color video is processed, each RGB channel of $C(X_i^k)$ is the average of the corresponding RGB channels of $p_i^k$ and its match). Converting all pixel points of the $k$-th frame view that have matching points into their corresponding three-dimensional positions yields the three-dimensional model of the scene in the overlap of the $k$-th and $(k+1)$-th frame views, denoted $M_k = \{X_i^k\}_{i=0}^{h-1}$ (where $h$ is the number of points contained in $M_k$).
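A sketch of this triangulation with OpenCV, placing the k-th frame camera at K[I|0] and the (k+1)-th frame camera at K[R|t] with the pose from step 2:

```python
import numpy as np
import cv2

def triangulate(pts_k, pts_k1, K, R, t):
    P_k = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # k-th frame camera
    P_k1 = K @ np.hstack([R, t.reshape(3, 1)])           # (k+1)-th frame camera
    # Linear triangulation of every matched pixel pair (homogeneous output).
    X_h = cv2.triangulatePoints(P_k, P_k1,
                                np.float64(pts_k).T, np.float64(pts_k1).T)
    return (X_h[:3] / X_h[3]).T     # h x 3 array of points X_i^k forming M_k
```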
5. Converting the three-dimensional models obtained in step 4 into the same coordinate system by the iterative closest point method;
For convenience of description, the camera coordinate system corresponding to the 0th frame view is hereinafter called the world coordinate system (i.e., the coordinate system of $M_0$); the purpose of this step is to convert $M_k$ into the world coordinate system. In this step and in step 6, $i$ indexes $M_k$ (and $\bar{M}_k$, defined below). Using the iterative closest point method, $M_k$ is registered to $M_0$, yielding a rotation matrix $R_k$ and a translation vector $T_k$. $R_k$ and $T_k$ define the rigid transformation of each point of $M_k$ into the world coordinate system, as in formula (6):
$$\bar{X}_i^k = R_k X_i^k + T_k \tag{6}$$
where $\bar{X}_i^k$ is the point $X_i^k$ after transformation into the world coordinate system; $\bar{M}_k = \{\bar{X}_i^k\}_{i=0}^{h-1}$ is the result of transforming $M_k$ into the world coordinate system.
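One possible realization of this registration uses Open3D's point-to-point ICP, as sketched below; max_dist and the identity initialization are assumed tuning choices not specified in the text:

```python
import numpy as np
import open3d as o3d

def register_to_world(M_k, M_0, max_dist=0.05):
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.asarray(M_k)))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.asarray(M_0)))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    T = result.transformation                     # 4x4 rigid transform
    R_k, T_k = T[:3, :3], T[:3, 3]
    M_k_bar = (R_k @ np.asarray(M_k).T).T + T_k   # formula (6) on every point
    return R_k, T_k, M_k_bar
```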
6. Projecting the three-dimensional model converted in step 5 onto the 0th frame view, thereby establishing the mapping between each point on the three-dimensional model and a position on the 0th frame view.
For each $\bar{X}_i^k \in \bar{M}_k$, $\bar{X}_i^k$ is projected into the 0th frame view:
$$[u, v]^T = \pi(\bar{X}_i^k) \tag{7}$$
In formula (7), $\pi(\cdot)$ is the perspective projection mapping and $[u, v]^T$ are the coordinates of the projection of $\bar{X}_i^k$ in the 0th frame view, $u$ and $v$ being the abscissa and ordinate of the projection. Formula (7) thus maps $\bar{X}_i^k$ to the position with coordinates $[u, v]^T$ in the 0th frame view. Substituting formula (6) into formula (7) yields the mapping from $X_i^k$ to the position $[u, v]^T$ of the 0th frame view, as in formula (8):
$$[u, v]^T = \pi(R_k X_i^k + T_k) \tag{8}$$
According to formula (8), the mapping from the points of $M_k$ to positions in the 0th frame view is established.
7. Fusing all points mapped to the 0th frame view.
Let the number of video frames be $n_{\mathrm{frames}}$. For $k = 0, 1, 2, \ldots, n_{\mathrm{frames}} - 2$, steps 1 to 6 are performed. Let the coordinates of a pixel point of the 0th frame view be $[u, v]^T$, where $u$ is the abscissa and $v$ the ordinate. The set of all points mapped to the position $[u, v]^T$ of the 0th frame view is called the support set of $[u, v]^T$ and is denoted $U(u, v)$. Let $U(u, v) = \{X_0, X_1, \ldots, X_{r-1}\}$, where $r$ is the number of points in $U(u, v)$. If $U(u, v)$ is not empty (i.e., $r > 0$), the gray (or color) of $[u, v]^T$ is:
$$C_0(u, v) = \frac{1}{r} \sum_{i=0}^{r-1} C(X_i) \tag{9}$$
where $C(X_i)$ denotes the color of $X_i$ (see step 4 for the definition) and $C_0(u, v)$ is the color of the 0th frame view at $[u, v]^T$; as formula (9) defines, it is the average of the $C(X_i)$. Recomputing in this way the gray of every position of the 0th frame view whose support set is not empty completes the stitching and yields the panorama.
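A sketch of this fusion in NumPy: for every position [u, v]^T the colors of all points mapped there (the support set U(u, v)) are accumulated and averaged as in formula (9); rounding to the nearest pixel and skipping points outside the canvas are assumptions of the sketch:

```python
import numpy as np

def fuse(points, colors, K, H, W):
    acc = np.zeros((H, W, 3), dtype=np.float64)   # running color sums
    cnt = np.zeros((H, W), dtype=np.int64)        # r = |U(u, v)|
    for X_bar, C in zip(points, colors):          # world-frame points + colors
        X_bar = np.asarray(X_bar, dtype=np.float64)
        if X_bar[2] <= 0:
            continue                              # behind the 0th-frame camera
        x = K @ X_bar
        u, v = int(round(x[0] / x[2])), int(round(x[1] / x[2]))
        if 0 <= v < H and 0 <= u < W:
            acc[v, u] += C
            cnt[v, u] += 1
    pano = np.zeros_like(acc)
    nonempty = cnt > 0                            # r > 0, as in formula (9)
    pano[nonempty] = acc[nonempty] / cnt[nonempty, None]
    return pano.astype(np.uint8)
```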
FIG. 2 shows an experimental result of the present invention: (a) four frame views taken from the experimental video; (b) the result of panorama stitching of the experimental video by the present invention. It can be seen that the panorama stitched by the method reflects the overall appearance of the scene, with natural transitions and a high degree of realism.
The above examples are provided only for the purpose of describing the present invention, and are not intended to limit the scope of the present invention. The scope of the invention is defined by the appended claims. Various equivalent substitutions and modifications can be made without departing from the spirit and principles of the invention, and are intended to be within the scope of the invention.

Claims (6)

1. A large-field-of-view video panorama stitching method based on the iterative closest point, characterized by comprising the following steps:
(1) extracting and matching the features of two adjacent frame views;
(2) calculating the relative pose of the two adjacent frame views from the features obtained in step (1);
(3) deriving the epipolar constraint from the relative pose, and performing dense matching of the two adjacent frame views under the epipolar constraint to obtain dense matching point pairs;
(4) calculating a three-dimensional model of the overlap region of the two adjacent frame views using the dense matching point pairs obtained in step (3);
(5) converting the three-dimensional model obtained in step (4) into the camera coordinate system of the 0th frame view, i.e. the world coordinate system, by the iterative closest point method;
(6) projecting the three-dimensional model converted in step (5) onto the 0th frame view, and establishing the mapping between each point on the three-dimensional model and a position on the 0th frame view;
(7) fusing the points mapped to the same position of the 0th frame view on the basis of the mapping obtained in step (6), thereby completing the stitching.
2. The large-field-of-view video panorama stitching method based on the iterative closest point as claimed in claim 1, wherein the extraction and matching of the features of the two adjacent frame views in step (1) are realized as follows:
(1) SIFT feature points are extracted from the two adjacent frame views, and a descriptor is computed for each feature point;
(2) the features extracted from the two adjacent frame views are matched on the basis of their descriptors, yielding several matched feature point pairs between the two views.
3. The large-field-of-view video panorama stitching method based on the iterative closest point as claimed in claim 1, wherein step (4) calculates the three-dimensional model of the overlap region of the two adjacent frame views as follows:
according to the photographic ranging principle, the three-dimensional points corresponding to the matching point pairs between the two frame views obtained in step (3) are computed; together, these three-dimensional points form the three-dimensional model of the overlap region of the two adjacent frame views, denoted $M_k = \{X_i^k\}_{i=0}^{h-1}$, where $X_i^k$ represents a point of $M_k$, $h$ is the number of points in $M_k$, and the superscript $k$ indicates that $M_k$ comes from the dense matching point pairs of the $k$-th and $(k+1)$-th frame views; the color of each point of $M_k$ is the mean of the colors of the matching points from which the three-dimensional point was computed.
4. The large-field-of-view video panorama stitching method based on the iterative closest point as claimed in claim 1, wherein step (5) converts the three-dimensional model obtained in step (4) into the camera coordinate system of the 0th frame view by the iterative closest point method, as follows:
(11) let $M_0$ be the three-dimensional model corresponding to the overlap region of the 0th and 1st frame views; the optimal rigid transformation from $M_k$ to $M_0$, described by a rotation matrix $R_k$ and a translation vector $T_k$, is computed by the iterative closest point method;
(12) the optimal rigid transformation is applied to the points of $M_k$, as in formula (1):
$$\bar{X}_i^k = R_k X_i^k + T_k \tag{1}$$
where $\bar{X}_i^k$ is the point $X_i^k$ after transformation into the world coordinate system and $i$ indexes $M_k$ (and $\bar{M}_k$, defined below); $\bar{M}_k = \{\bar{X}_i^k\}_{i=0}^{h-1}$ is the result of transforming $M_k$ into the camera coordinate system of the 0th frame view.
5. The large-field-of-view video panorama stitching method based on the iterative closest point as claimed in claim 1, wherein step (6) establishes the mapping between each point on the three-dimensional model and a position on the 0th frame view, realized as follows:
(21) each point of $\bar{M}_k$ is projected onto the 0th frame view, as in formula (2):
$$[u, v]^T = \pi(\bar{X}_i^k) \tag{2}$$
in formula (2), $\pi(\cdot)$ is the perspective projection mapping and $[u, v]^T$ are the coordinates of the projection of $\bar{X}_i^k$ in the 0th frame view, $u$ and $v$ being the abscissa and ordinate of the projection;
(22) combining formula (2) with formula (1) establishes the mapping between each point of $M_k$ and a position on the 0th frame view, as in formula (3):
$$[u, v]^T = \pi(R_k X_i^k + T_k) \tag{3}$$
6. The large-field-of-view video panorama stitching method based on the iterative closest point as claimed in claim 1, wherein in step (7) the points at the same position of the 0th frame view are fused, realized as follows:
for every position $p$ of the 0th frame view, the following is performed:
(31) find all points put into correspondence with position $p$ by step (6), compute the mean of their colors, and record it as $C$;
(32) assign the color $C$ to position $p$.
CN201810028354.5A 2018-01-12 2018-01-12 A large-field-of-view video panorama stitching method based on the iterative closest point Active CN108257089B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810028354.5A CN108257089B (en) 2018-01-12 2018-01-12 A large-field-of-view video panorama stitching method based on the iterative closest point

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810028354.5A CN108257089B (en) 2018-01-12 2018-01-12 A large-field-of-view video panorama stitching method based on the iterative closest point

Publications (2)

Publication Number Publication Date
CN108257089A true CN108257089A (en) 2018-07-06
CN108257089B CN108257089B (en) 2019-01-08

Family

ID=62726564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810028354.5A Active CN108257089B (en) A large-field-of-view video panorama stitching method based on the iterative closest point

Country Status (1)

Country Link
CN (1) CN108257089B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110874818A (en) * 2018-08-31 2020-03-10 阿里巴巴集团控股有限公司 Image processing and virtual space construction method, device, system and storage medium
CN111105347A (en) * 2019-11-19 2020-05-05 贝壳技术有限公司 Method, device and storage medium for generating panoramic image with depth information
CN111242990A (en) * 2020-01-06 2020-06-05 西南电子技术研究所(中国电子科技集团公司第十研究所) 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching
CN113052761A (en) * 2019-12-26 2021-06-29 炬星科技(深圳)有限公司 Laser point cloud map fusion method, device and computer readable storage medium
US11055835B2 (en) 2019-11-19 2021-07-06 Ke.com (Beijing) Technology, Co., Ltd. Method and device for generating virtual reality data
CN113160053A (en) * 2021-04-01 2021-07-23 华南理工大学 Pose information-based underwater video image restoration and splicing method
CN114143528A (en) * 2020-09-04 2022-03-04 北京大视景科技有限公司 Multi-video stream fusion method, electronic device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100033553A1 (en) * 2008-08-08 2010-02-11 Zoran Corporation In-camera panorama image stitching assistance
CN104484648A (en) * 2014-11-27 2015-04-01 浙江工业大学 Variable-viewing angle obstacle detection method for robot based on outline recognition
US20150346115A1 (en) * 2014-05-30 2015-12-03 Eric J. Seibel 3d optical metrology of internal surfaces
CN105279789A (en) * 2015-11-18 2016-01-27 中国兵器工业计算机应用技术研究所 A three-dimensional reconstruction method based on image sequences
CN105374011A (en) * 2015-12-09 2016-03-02 中电科信息产业有限公司 Panoramic image based point cloud data splicing method and apparatus
CN107169924A (en) * 2017-06-14 2017-09-15 歌尔科技有限公司 The method for building up and system of three-dimensional panoramic image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100033553A1 (en) * 2008-08-08 2010-02-11 Zoran Corporation In-camera panorama image stitching assistance
US20150346115A1 (en) * 2014-05-30 2015-12-03 Eric J. Seibel 3d optical metrology of internal surfaces
CN104484648A (en) * 2014-11-27 2015-04-01 浙江工业大学 Variable-viewing angle obstacle detection method for robot based on outline recognition
CN105279789A (en) * 2015-11-18 2016-01-27 中国兵器工业计算机应用技术研究所 A three-dimensional reconstruction method based on image sequences
CN105374011A (en) * 2015-12-09 2016-03-02 中电科信息产业有限公司 Panoramic image based point cloud data splicing method and apparatus
CN107169924A (en) * 2017-06-14 2017-09-15 歌尔科技有限公司 The method for building up and system of three-dimensional panoramic image

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
CHRYSANTHOU Y et al.: "Image-based registration of 3D-range data using feature surface elements", Archaeology and Intelligent Cultural Heritage *
JUN CHU et al.: "Multi-view point clouds registration and stitching based on SIFT feature", https://www.researchgate.net/publication/251999308 *
CHU Jun et al. (储珺等): "Multi-view point cloud data registration and stitching algorithm based on SIFT features", Electro-Optic Technology Application (《光电技术应用》) *
LYU Yaowen et al. (吕耀文等): "Research on 3D reconstruction and stitching technology based on binocular vision", Optoelectronic Technology (《光电子技术》) *
GENG Xiaoling (耿晓玲): "Research on stitching methods for large-field-of-view video panoramas", Wanfang Dissertation Full-text Database (《万方学位论文全文数据库》) *
ZHAO Yang (赵阳): "Research on several key technologies of 3D panoramic image generation", China Masters' Theses Full-text Database, Engineering Science and Technology (《中国优秀硕士学位论文全文数据库 工程科技辑》) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110874818A (en) * 2018-08-31 2020-03-10 阿里巴巴集团控股有限公司 Image processing and virtual space construction method, device, system and storage medium
CN110874818B (en) * 2018-08-31 2023-06-23 阿里巴巴集团控股有限公司 Image processing and virtual space construction method, device, system and storage medium
US11055835B2 (en) 2019-11-19 2021-07-06 Ke.com (Beijing) Technology, Co., Ltd. Method and device for generating virtual reality data
CN111105347B (en) * 2019-11-19 2020-11-13 贝壳找房(北京)科技有限公司 Method, device and storage medium for generating panoramic image with depth information
CN111105347A (en) * 2019-11-19 2020-05-05 贝壳技术有限公司 Method, device and storage medium for generating panoramic image with depth information
US11721006B2 (en) 2019-11-19 2023-08-08 Realsee (Beijing) Technology Co., Ltd. Method and device for generating virtual reality data
CN113052761A (en) * 2019-12-26 2021-06-29 炬星科技(深圳)有限公司 Laser point cloud map fusion method, device and computer readable storage medium
WO2021129349A1 (en) * 2019-12-26 2021-07-01 炬星科技(深圳)有限公司 Laser point cloud map merging method, apparatus, and computer readable storage medium
CN113052761B (en) * 2019-12-26 2024-01-30 炬星科技(深圳)有限公司 Laser point cloud map fusion method, device and computer readable storage medium
CN111242990A (en) * 2020-01-06 2020-06-05 西南电子技术研究所(中国电子科技集团公司第十研究所) 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching
CN111242990B (en) * 2020-01-06 2024-01-30 西南电子技术研究所(中国电子科技集团公司第十研究所) 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching
CN114143528A (en) * 2020-09-04 2022-03-04 北京大视景科技有限公司 Multi-video stream fusion method, electronic device and storage medium
CN113160053A (en) * 2021-04-01 2021-07-23 华南理工大学 Pose information-based underwater video image restoration and splicing method
CN113160053B (en) * 2021-04-01 2022-06-14 华南理工大学 Pose information-based underwater video image restoration and splicing method

Also Published As

Publication number Publication date
CN108257089B (en) 2019-01-08

Similar Documents

Publication Publication Date Title
CN108257089B (en) A large-field-of-view video panorama stitching method based on the iterative closest point
CN111047510B (en) Large-field-angle image real-time splicing method based on calibration
CN110390640B (en) Template-based Poisson fusion image splicing method, system, equipment and medium
CN105933678B (en) More focal length lens linkage imaging device based on Multiobjective Intelligent tracking
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
Gurdjos et al. Methods and geometry for plane-based self-calibration
CN103345736A (en) Virtual viewpoint rendering method
CN107578376B (en) Image splicing method based on feature point clustering four-way division and local transformation matrix
CN115205489A (en) Three-dimensional reconstruction method, system and device in large scene
Dellepiane et al. Flow-based local optimization for image-to-geometry projection
CN110070598A (en) Mobile terminal and its progress 3D scan rebuilding method for 3D scan rebuilding
CN105894443A (en) Method for splicing videos in real time based on SURF (Speeded UP Robust Features) algorithm
Xu et al. Layout-guided novel view synthesis from a single indoor panorama
CN108093188B (en) A method of the big visual field video panorama splicing based on hybrid projection transformation model
CN113538569A (en) Weak texture object pose estimation method and system
CN110580715B (en) Image alignment method based on illumination constraint and grid deformation
CN107330856B (en) Panoramic imaging method based on projective transformation and thin plate spline
CN111681271B (en) Multichannel multispectral camera registration method, system and medium
Lee et al. High-quality depth estimation using an exemplar 3d model for stereo conversion
CN107067368B (en) Streetscape image splicing method and system based on deformation of image
CN117333659A (en) Multi-target detection method and system based on multi-camera and camera
CN116823895A (en) Variable template-based RGB-D camera multi-view matching digital image calculation method and system
CN114399423B (en) Image content removing method, system, medium, device and data processing terminal
CN115482339A (en) Face facial feature map generating method
Uzpak et al. Style transfer for keypoint matching under adverse conditions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant