CN108280858A - Linear global camera motion parameter estimation method in multi-view reconstruction - Google Patents

Linear global camera motion parameter estimation method in multi-view reconstruction

Info

Publication number
CN108280858A
CN108280858A CN201810085740.8A
Authority
CN
China
Prior art keywords
camera
matrix
absolute
global
parameter estimation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810085740.8A
Other languages
Chinese (zh)
Other versions
CN108280858B (en)
Inventor
秦红星
胡闯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Jinlaojin Information Technology Co ltd
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201810085740.8A priority Critical patent/CN108280858B/en
Publication of CN108280858A publication Critical patent/CN108280858A/en
Application granted granted Critical
Publication of CN108280858B publication Critical patent/CN108280858B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 Matching configurations of points or features

Abstract

The present invention relates to a linear global camera motion parameter estimation method in multi-view reconstruction, belonging to the fields of multi-view geometry and three-dimensional reconstruction, and comprising the following steps: S1: input multiple images, and match multiple images of a fixed scene pairwise; S2: using multi-view geometry theory and a global optimization method, place all cameras under the same coordinate system and optimize; S3: compute the absolute rotations of all cameras; S4: using the absolute rotations of the cameras and the point matches, compute the absolute translation vectors of the cameras according to the epipolar constraint of multi-view geometry. The method of the present invention can handle the reconstruction of large scenes, is relatively insensitive to noise, is simple to compute, requires no additional information, and greatly improves the precision of the motion parameters.

Description

Linear global camera motion parameter estimation method in multi-view reconstruction
Technical field
The invention belongs to the fields of multi-view geometry and three-dimensional reconstruction, and relates to a linear global camera motion parameter estimation method in multi-view reconstruction.
Background technology
In computer vision, three-dimensional reconstruction is an active research topic. The reconstructed three-dimensional models have a wide range of application scenarios, so three-dimensional reconstruction plays an important role in everyday life. Image-based three-dimensional reconstruction in particular is widely used in fields such as 3D game development, the restoration of ancient books and relics, three-dimensional digital city design, and industrial prototype design; research on image-based three-dimensional reconstruction is therefore one of the important directions in computer vision and computer graphics. After years of research and with the development of computer technology, some of the sub-problems, such as camera calibration and the acquisition of intrinsic parameters, have been solved reasonably well. However, the estimation of the extrinsic parameters, that is, the motion parameters, still lacks a satisfactory solution, especially for large-scale scenes and in the presence of noise. At present, motion parameter estimation remains a key topic of three-dimensional reconstruction research.
As research has deepened, researchers have proposed many motion parameter estimation methods. Among them are linear solution methods and iterative methods that approach the true values through repeated iteration; there are local optimization methods and global optimization methods, in which all cameras are placed under the same coordinate system and solved jointly; and there are methods that solve the motion alone as well as methods that solve it together with the structure, that is, the spatial scene points are estimated at the same time. In essence, the problem is now commonly treated as an optimization problem.
Although these motion parameter estimation methods each have their own advantages, they also have different drawbacks. Some are too complex to compute, making it difficult to balance computational cost against precision, or are difficult to carry out at all; some are simple to compute but do not achieve satisfactory precision; others are sensitive to noise. In addition, many of them cannot effectively handle large-scale reconstruction involving hundreds or thousands of images.
Invention content
In view of this, the purpose of the present invention is to provide a linear global camera motion parameter estimation method in multi-view reconstruction, addressing the noise sensitivity and limited precision of existing motion parameter estimation methods. The linear global parameter estimation method proposed by the present invention is noise-resistant and accurate: based on multi-view geometry, the essential matrices can be computed so as to overcome the influence of errors or mistakes in the initialization of the essential matrices, and the method is linear and relatively efficient to compute. The method of the invention can thus effectively solve the problems of existing motion parameter estimation methods.
In order to achieve the above objectives, the present invention provides the following technical solutions:
A method for linear global camera motion parameter estimation in multi-view reconstruction comprises the following steps:
S1: match multiple images of a fixed scene pairwise;
S2: using multi-view geometry theory and a global optimization method, place all cameras under the same coordinate system and optimize;
S3: compute the absolute rotations of all cameras;
S4: using the absolute rotations of the cameras and the point matches, compute the absolute translation vectors of the cameras according to the epipolar constraint of multi-view geometry.
Further, step S1 specifically comprises:
S11: match multiple images of a fixed scene pairwise;
S12: judge the match points obtained from each pairwise matching; if the inliers are insufficient or do not meet requirements, the matched point pairs are regarded as missing, and the matched point pairs are not retained or the images corresponding to the matched point pairs are skipped.
Further, in step S2, all cameras are placed under the same coordinate system and optimized using multi-view geometry theory and a global optimization method, and the absolute rotation matrices of the cameras are computed from the objective function
{R_1, ..., R_n} = argmin_(R_1,...,R_n) Σ_(i,j) || R_i R_ij - R_j ||_F^2
where R_i is the absolute rotation matrix, representing the orientation of camera i under the global coordinate system; R_ij is the relative rotation matrix, representing the rotation of the i-th camera relative to the j-th camera; R_j represents the orientation of camera j under the global coordinate system; R_n represents the orientation of camera n under the global coordinate system; n is the number of images; and || · ||_F denotes the Frobenius norm of a matrix.
Further, the objective function is computed in the following way:
S21: gather all relative rotations and construct a 3n × 3n symmetric matrix G;
S22: count the number of valid relative rotation matrices in each row block of the matrix G and construct a 3n × 3n matrix D.
Further, the symmetric matrix G is
G = [ I      R_12   ...   R_1n ]
    [ R_21   I      ...   R_2n ]
    [ ...    ...    ...   ...  ]
    [ R_n1   R_n2   ...   I    ]
and the matrix D is the block-diagonal matrix
D = diag(d_1 I, d_2 I, ..., d_n I)
where I is the unit matrix, R_ij is the relative rotation matrix of the cameras, representing the rotation of the i-th camera relative to the j-th camera, and d_k, 1 ≤ k ≤ n, is the number of valid relative rotations contained in the k-th row block of the matrix G.
Further, step S4 specifically comprises:
S41: compute the translation vectors of the cameras according to the absolute rotations of the cameras and the match points;
S42: according to the epipolar constraint of multi-view geometry, obtain the absolute translation vectors in combination with the matched point pairs.
Further, in step S41, the translation vectors of the cameras are obtained from the essential matrix, and the essential matrix is
E_ij = R_i^T (T_i - T_j) R_j
where T_i = [t_i]_× denotes the skew-symmetric matrix corresponding to t_i, T_j denotes the skew-symmetric matrix corresponding to t_j, t_i is the absolute translation vector of the i-th camera, representing its position under the world coordinate system, and t_j is the absolute translation vector of the j-th camera;
in step S42, the absolute translation vectors obtained in combination with the matched point pairs satisfy
(p_i^m)^T E_ij p_j^m = 0
where p_i^m is the m-th feature point on the i-th image and p_j^m is the m-th feature point on the j-th image.
The beneficial effects of the present invention are as follows. The linear global camera motion parameter estimation method provided by the invention is based on multi-view geometry and, following the idea of global optimization, places all cameras under the same coordinate system and considers them jointly. Two 3n × 3n matrices are computed, and the motion parameters of the cameras under the world coordinate system are finally obtained linearly. The method of the invention overcomes the influence of errors in the relative rotation initialization, can handle the reconstruction of large scenes, is relatively insensitive to noise, is simple to compute, requires no additional information, and greatly improves the precision of the motion parameters.
Description of the drawings
In order to make the purpose, technical solutions and advantageous effects of the present invention clearer, the present invention provides the following drawings for explanation:
Fig. 1 is a flowchart of the method of the present invention;
Fig. 2 is a diagram of the pinhole imaging model;
Fig. 3 is a schematic diagram of epipolar geometry;
Fig. 4 is a visualization of the camera motion estimated from the images.
Detailed description of the embodiments
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The present invention provides a linear global camera motion parameter estimation method in multi-view reconstruction based on feature point matching and multi-view geometry theory. First, a series of images of a fixed scene are matched pairwise to obtain matched point pairs, pairwise essential matrices are computed from the point pairs, and the essential matrices are decomposed to obtain pairwise relative rotation matrices. Then, according to an objective function, global optimization over these pairwise relative rotations yields the absolute rotation matrices. Finally, using a new expression of the essential matrix derived from multi-view geometry theory, a linear system of equations is constructed and solved in combination with the image feature points to obtain the absolute translation vectors. As shown in Fig. 1, the method of the invention specifically includes the following steps:
Step 1: input a series of images and match them pairwise using a feature matching algorithm.
A series of ordered or unordered images of a fixed scene are matched pairwise; in this example SURF feature descriptors are used to obtain the match points. The obtained match points are then judged: once the inliers are insufficient or do not meet requirements, the pair is regarded as missing, the match points are no longer retained, or the two images are skipped.
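As a rough illustration of this pairwise matching and inlier screening step, the following Python/OpenCV sketch may be used. SIFT stands in here for SURF (which requires the opencv-contrib build), the intrinsic matrix K is assumed known, and the threshold MIN_INLIERS as well as the function name match_pair are illustrative choices, not values fixed by the patent:

```python
import cv2
import numpy as np

MIN_INLIERS = 30  # assumed threshold for "sufficient" inliers; not specified by the patent


def match_pair(img_a, img_b, K):
    """Match two images of the fixed scene and keep the pair only if enough inliers survive."""
    det = cv2.SIFT_create()                      # SURF would need the opencv-contrib build
    kp_a, des_a = det.detectAndCompute(img_a, None)
    kp_b, des_b = det.detectAndCompute(img_b, None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < MIN_INLIERS:
        return None                              # pair "regarded as missing": skip it
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    # RANSAC on the essential matrix marks the epipolar inliers
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    if E is None or mask is None or mask.sum() < MIN_INLIERS:
        return None
    inliers = mask.ravel().astype(bool)
    return pts_a[inliers], pts_b[inliers], E
```

Pairs for which this function returns None correspond to the image pairs that the text above describes as skipped.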
Step 2: compute the essential matrices from the match points, and decompose each essential matrix by singular value decomposition, based on multi-view theory, to obtain the pairwise relative rotation matrices.
Step 201: compute the essential matrix. The essential matrix is a key concept in multi-view geometry theory and is related to the extrinsic parameters of the cameras: it associates the physical coordinates of a spatial point P observed by the left camera with the position of the same point observed by the right camera, as shown in Fig. 2.
Step 202: decompose the essential matrix E_ij to obtain the pairwise relative rotations. The essential matrix is:
E_ij = [t_ij]_× R_ij
where t_ij is the relative translation vector, representing the position of the i-th camera relative to the j-th camera; [t_ij]_× denotes the skew-symmetric matrix corresponding to t_ij; and R_ij is the relative rotation matrix, representing the rotation of the i-th camera relative to the j-th camera.
Using the principle of singular value decomposition, the essential matrix E_ij can be decomposed into a rotation matrix R_ij and a translation vector t_ij. However, more than one rotation is obtained, so the solutions are screened. The screening principle is that the spatial point should lie in front of both cameras: as shown in Fig. 2, the spatial point P should be in front of both the left and the right camera.
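A minimal sketch of this SVD decomposition and the front-of-both-cameras screening is given below, assuming calibrated, normalized image coordinates. The conventions follow the standard textbook four-solution decomposition and may differ from the patent's by a transpose or sign; the function names are illustrative:

```python
import numpy as np


def decompose_essential(E):
    """Decompose an essential matrix into the four candidate (R, t) pairs via SVD."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    R1, R2, t = U @ W @ Vt, U @ W.T @ Vt, U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]


def triangulate(R, t, x1, x2):
    """Linear triangulation of one normalized point pair for cameras [I|0] and [R|t]."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]


def screen_solutions(E, x1, x2):
    """Keep the (R, t) for which the triangulated point lies in front of both cameras."""
    for R, t in decompose_essential(E):
        X = triangulate(R, t, x1, x2)
        if X[2] > 0 and (R @ X + t)[2] > 0:
            return R, t
    return None
```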
In addition, according to the epipolar geometry principle of multi-view geometry, the spatial point P and its projection points p_i and p_j on image i and image j satisfy the epipolar constraint, as shown in Fig. 3. The corresponding relation is:
p_i^T E_ij p_j = 0
In the present embodiment, t_i ∈ R^3 denotes the optical center (position) of the camera under the global coordinate system and R_i ∈ SO(3) denotes the orientation of the camera under the global coordinate system. For a pair of images i and j, the following relation holds:
R_ij = R_i^T R_j
At the same time, the corresponding relation between these relative quantities and the absolute rotations and translations is easy to verify.
After all relative rotation matrices have been obtained, step 3 can begin.
Step 3: using the relative rotations, establish the objective function and solve it by eigenvalue decomposition to obtain the absolute camera rotation matrices.
After the processing of step 2, the relative rotations between all pairs of images are available, and the absolute rotations are solved from these relative rotations.
The objective function is first constructed according to formula (3):
{R_1, ..., R_n} = argmin_(R_1,...,R_n) Σ_(i,j) || R_i R_ij - R_j ||_F^2
To solve this objective function, a 3n × 3n symmetric matrix G containing all pairwise relative rotation matrices is built:
G = [ I      R_12   ...   R_1n ]
    [ R_21   I      ...   R_2n ]
    [ ...    ...    ...   ...  ]
    [ R_n1   R_n2   ...   I    ]
At the same time, a 3 × 3n matrix R containing all absolute rotations is defined as
R = [R_1 R_2 ... R_n]
Next, the number of valid relative rotation matrices in each row block of the matrix G is counted, and the matrix D is constructed as
D = diag(d_1 I, d_2 I, ..., d_n I)
where I is the unit matrix and d_k is the number of valid relative rotations R_kj contained in the k-th row block of the matrix G, 1 ≤ k ≤ n.
It can thus be verified that G R^T = D R^T: the k-th row block of G R^T equals Σ_j R_kj R_j^T = Σ_j R_k^T R_j R_j^T = d_k R_k^T, which is exactly the k-th row block of D R^T. Therefore the three eigenvectors of the matrix D^-1 G with eigenvalue 1 constitute the columns of the matrix R^T. In order to extract the absolute rotation matrices, a 3n × 3 matrix M containing these eigenvectors is defined as
M = [M_1; M_2; ... ; M_n]
where each M_i is exactly an estimate of the absolute rotation of the i-th camera.
Finally, singular value decomposition is applied to each M_i to obtain each absolute rotation matrix R_i.
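The following NumPy sketch summarises steps S21, S22 and the eigenvector extraction described above. The dictionary rel_rots, the handling of missing pairs, and the projection of each 3 × 3 block back onto SO(3) via SVD follow the text, while the function and variable names themselves are illustrative:

```python
import numpy as np


def global_rotations(rel_rots, n):
    """Spectral recovery of absolute rotations from pairwise relative rotations.

    rel_rots maps (i, j) with i < j to the 3x3 relative rotation R_ij; n is the
    number of cameras. Returns a list of n absolute rotation matrices R_i."""
    G = np.zeros((3 * n, 3 * n))
    # the identity block on each diagonal counts toward d_k so that G R^T = D R^T holds exactly
    d = np.ones(n)
    for i in range(n):
        G[3 * i:3 * i + 3, 3 * i:3 * i + 3] = np.eye(3)
    for (i, j), Rij in rel_rots.items():
        G[3 * i:3 * i + 3, 3 * j:3 * j + 3] = Rij
        G[3 * j:3 * j + 3, 3 * i:3 * i + 3] = Rij.T   # G is symmetric block-wise
        d[i] += 1
        d[j] += 1
    D_inv = np.diag(np.repeat(1.0 / d, 3))
    # with noise, the three eigenvalues of D^-1 G closest to 1 are simply the largest ones
    w, V = np.linalg.eig(D_inv @ G)
    M = V[:, np.argsort(-w.real)[:3]].real            # 3n x 3, stacked blocks M_i
    rotations = []
    for i in range(n):
        U, _, Vt = np.linalg.svd(M[3 * i:3 * i + 3, :])
        Ri = U @ Vt                                   # project the block onto SO(3)
        if np.linalg.det(Ri) < 0:
            Ri = -Ri
        rotations.append(Ri)
    return rotations
```

As with any rotation averaging scheme, the recovered rotations are determined only up to a common global rotation, which is the usual gauge freedom of the problem.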
Step 4: according to multi-view geometry theory, construct linear equations using the absolute rotation matrices and the matched point pairs, and solve them to obtain the absolute camera translation vectors.
With the absolute rotation R_i of each camera obtained in step 3, the absolute translation vectors of the cameras are then solved. First, an expression of the essential matrix that involves only the rotations and translations is found:
E_ij = R_i^T (T_i - T_j) R_j
where T_i = [t_i]_× and T_j = [t_j]_× denote the skew-symmetric matrices corresponding to t_i and t_j.
Further, according to the epipolar constraint of multi-view geometry, the following equations are obtained in combination with the matched point pairs:
(p_i^m)^T E_ij p_j^m = 0
where p_i^m is the m-th feature point on the i-th image and p_j^m is the m-th feature point on the j-th image.
Finally, the linear system of equations is solved in the least-squares sense: the eigenvectors corresponding to the four smallest eigenvalues of a 3n × 3n matrix are found, from which the absolute translation vector t_i of each camera is obtained.
The motion parameters of each camera, namely the absolute rotation matrix R_i and the absolute translation vector t_i, are thus estimated; the camera motion estimated from the image pairs is shown in Fig. 4.
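A corresponding sketch of the translation step is shown below. Each matched, normalized point pair contributes one row that is linear in the stacked translations, and the solution is read from the near-null space of the 3n × 3n normal matrix. The way the global-shift and scale ambiguity (the four smallest eigenvalues mentioned above) is resolved here, and all function and variable names, are illustrative assumptions:

```python
import numpy as np


def global_translations(rotations, matches):
    """Linear recovery of camera positions from absolute rotations and point matches.

    rotations: list of n absolute rotation matrices R_i.
    matches: dict mapping (i, j) to (Pi, Pj), two (m, 3) arrays of matched
             normalized homogeneous image points on images i and j.
    Returns an (n, 3) array of positions, defined up to a global shift and scale."""
    n = len(rotations)
    rows = []
    for (i, j), (Pi, Pj) in matches.items():
        for pi, pj in zip(Pi, Pj):
            a = rotations[i] @ pi              # viewing rays expressed in the global frame
            b = rotations[j] @ pj
            c = np.cross(b, a)                 # epipolar constraint: c . (t_i - t_j) = 0,
            row = np.zeros(3 * n)              # i.e. the two rays and the baseline are coplanar
            row[3 * i:3 * i + 3] = c
            row[3 * j:3 * j + 3] = -c
            rows.append(row)
    A = np.asarray(rows)
    # least squares: the 4 smallest eigenvalues of the 3n x 3n matrix A^T A span the
    # solution together with the 3-dimensional "shift every camera equally" ambiguity
    _, V = np.linalg.eigh(A.T @ A)
    N = V[:, :4]
    shift = np.zeros((3 * n, 3))
    for k in range(3):
        shift[k::3, k] = 1.0                   # basis of the common-shift directions
    shift, _ = np.linalg.qr(shift)
    N = N - shift @ (shift.T @ N)              # remove the shift directions from the null space
    t = np.linalg.svd(N, full_matrices=False)[0][:, 0].reshape(n, 3)
    return t - t.mean(axis=0)                  # report positions centred at the origin
```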
Finally, it should be noted that the above preferred embodiments are intended only to illustrate the technical solutions of the invention and are not restrictive. Although the present invention has been described in detail by means of the above preferred embodiments, those skilled in the art should understand that various changes in form and detail may be made thereto without departing from the scope defined by the claims of the present invention.

Claims (7)

1. A method for linear global camera motion parameter estimation in multi-view reconstruction, characterized by comprising the following steps:
S1: input multiple images, and match multiple images of a fixed scene pairwise;
S2: using multi-view geometry theory and a global optimization method, place all cameras under the same coordinate system and optimize;
S3: compute the absolute rotations of all cameras;
S4: using the absolute rotations of the cameras and the point matches, compute the absolute translation vectors of the cameras according to the epipolar constraint of multi-view geometry.
2. The method for linear global camera motion parameter estimation in multi-view reconstruction according to claim 1, characterized in that step S1 specifically comprises:
S11: match multiple images of a fixed scene pairwise;
S12: judge the match points obtained from each pairwise matching; if the inliers are insufficient or do not meet requirements, the matched point pairs are regarded as missing, and the matched point pairs are not retained or the images corresponding to the matched point pairs are skipped.
3. The method for linear global camera motion parameter estimation in multi-view reconstruction according to claim 2, characterized in that in step S2, all cameras are placed under the same coordinate system and optimized using multi-view geometry theory and a global optimization method, and the absolute rotation matrices of the cameras are computed from the objective function
{R_1, ..., R_n} = argmin_(R_1,...,R_n) Σ_(i,j) || R_i R_ij - R_j ||_F^2
wherein R_i is an absolute rotation matrix, representing the orientation of camera i under the global coordinate system; R_ij is a relative rotation matrix, representing the rotation of the i-th camera relative to the j-th camera; R_j represents the orientation of camera j under the global coordinate system; R_n represents the orientation of camera n under the global coordinate system; n is the number of images; and || · ||_F denotes the Frobenius norm of a matrix.
4. The method for linear global camera motion parameter estimation in multi-view reconstruction according to claim 3, characterized in that the objective function is computed in the following way:
S21: gather all relative rotations and construct a 3n × 3n symmetric matrix G;
S22: count the number of valid relative rotation matrices in each row block of the matrix G and construct a 3n × 3n matrix D.
5. The method for linear global camera motion parameter estimation in multi-view reconstruction according to claim 4, characterized in that the symmetric matrix G is
G = [ I      R_12   ...   R_1n ]
    [ R_21   I      ...   R_2n ]
    [ ...    ...    ...   ...  ]
    [ R_n1   R_n2   ...   I    ]
and the matrix D is the block-diagonal matrix
D = diag(d_1 I, d_2 I, ..., d_n I)
wherein I is the unit matrix, R_ij is a relative rotation matrix of the cameras, representing the rotation of the i-th camera relative to the j-th camera, and d_k, 1 ≤ k ≤ n, is the number of valid relative rotations contained in the k-th row block of the matrix G.
6. The method for linear global camera motion parameter estimation in multi-view reconstruction according to claim 5, characterized in that step S4 specifically comprises:
S41: compute the translation vectors of the cameras according to the absolute rotations of the cameras and the match points;
S42: according to the epipolar constraint of multi-view geometry, obtain the absolute translation vectors in combination with the matched point pairs.
7. The method for linear global camera motion parameter estimation in multi-view reconstruction according to claim 6, characterized in that in step S41, the translation vectors of the cameras are obtained from the essential matrix, and the essential matrix is
E_ij = R_i^T (T_i - T_j) R_j
wherein T_i = [t_i]_× denotes the skew-symmetric matrix corresponding to t_i, T_j denotes the skew-symmetric matrix corresponding to t_j, t_i is the absolute translation vector of the i-th camera, representing its position under the world coordinate system, and t_j is the absolute translation vector of the j-th camera;
and in step S42, the absolute translation vectors obtained in combination with the matched point pairs satisfy
(p_i^m)^T E_ij p_j^m = 0
wherein p_i^m is the m-th feature point on the i-th image and p_j^m is the m-th feature point on the j-th image.
CN201810085740.8A 2018-01-29 2018-01-29 Linear global camera motion parameter estimation method in multi-view reconstruction Active CN108280858B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810085740.8A CN108280858B (en) 2018-01-29 2018-01-29 Linear global camera motion parameter estimation method in multi-view reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810085740.8A CN108280858B (en) 2018-01-29 2018-01-29 Linear global camera motion parameter estimation method in multi-view reconstruction

Publications (2)

Publication Number Publication Date
CN108280858A true CN108280858A (en) 2018-07-13
CN108280858B CN108280858B (en) 2022-02-01

Family

ID=62805627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810085740.8A Active CN108280858B (en) 2018-01-29 2018-01-29 Linear global camera motion parameter estimation method in multi-view reconstruction

Country Status (1)

Country Link
CN (1) CN108280858B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109166171A (en) * 2018-08-09 2019-01-08 西北工业大学 Motion recovery structure three-dimensional reconstruction method based on global and incremental estimation
CN109741403A (en) * 2018-12-29 2019-05-10 重庆邮电大学 It is a kind of that scaling method is translated based on global linear camera
CN110782524A (en) * 2019-10-25 2020-02-11 重庆邮电大学 Indoor three-dimensional reconstruction method based on panoramic image
CN111161355A (en) * 2019-12-11 2020-05-15 上海交通大学 Pure pose resolving method and system for multi-view camera pose and scene
CN111724466A (en) * 2020-05-26 2020-09-29 同济大学 3D reconstruction optimization method and device based on rotation matrix
CN111986247A (en) * 2020-08-28 2020-11-24 中国海洋大学 Hierarchical camera rotation estimation method
CN114170296A (en) * 2021-11-10 2022-03-11 埃洛克航空科技(北京)有限公司 Rotary average estimation method and device based on multi-mode comprehensive decision
CN114972536A (en) * 2022-05-26 2022-08-30 中国人民解放军战略支援部队信息工程大学 Aviation area array sweep type camera positioning and calibrating method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985154A (en) * 2014-04-25 2014-08-13 北京大学 Three-dimensional model reestablishment method based on global linear method
CN104019799A (en) * 2014-05-23 2014-09-03 北京信息科技大学 Relative orientation method by using optimization of local parameter to calculate basis matrix
CN104200523A (en) * 2014-09-11 2014-12-10 中国科学院自动化研究所 Large-scale scene three-dimensional reconstruction method for fusion of additional information
US20170076754A1 (en) * 2015-09-11 2017-03-16 Evergig Music S.A.S.U. Systems and methods for matching two or more digital multimedia files
US9619892B2 (en) * 2013-05-13 2017-04-11 Electronics And Telecommunications Research Institute Apparatus and method for extracting movement path of mutual geometric relationship fixed camera group
CN106981083A (en) * 2017-03-22 2017-07-25 大连理工大学 The substep scaling method of Binocular Stereo Vision System camera parameters

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9619892B2 (en) * 2013-05-13 2017-04-11 Electronics And Telecommunications Research Institute Apparatus and method for extracting movement path of mutual geometric relationship fixed camera group
CN103985154A (en) * 2014-04-25 2014-08-13 北京大学 Three-dimensional model reestablishment method based on global linear method
CN104019799A (en) * 2014-05-23 2014-09-03 北京信息科技大学 Relative orientation method by using optimization of local parameter to calculate basis matrix
CN104200523A (en) * 2014-09-11 2014-12-10 中国科学院自动化研究所 Large-scale scene three-dimensional reconstruction method for fusion of additional information
US20170076754A1 (en) * 2015-09-11 2017-03-16 Evergig Music S.A.S.U. Systems and methods for matching two or more digital multimedia files
CN106981083A (en) * 2017-03-22 2017-07-25 大连理工大学 The substep scaling method of Binocular Stereo Vision System camera parameters

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DANIEL MARTINEC et al.: "Robust Rotation and Translation Estimation in Multiview Reconstruction", 2007 IEEE Conference on Computer Vision and Pattern Recognition *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109166171B (en) * 2018-08-09 2022-05-13 西北工业大学 Motion recovery structure three-dimensional reconstruction method based on global and incremental estimation
CN109166171A (en) * 2018-08-09 2019-01-08 西北工业大学 Motion recovery structure three-dimensional reconstruction method based on global and incremental estimation
CN109741403A (en) * 2018-12-29 2019-05-10 重庆邮电大学 It is a kind of that scaling method is translated based on global linear camera
CN109741403B (en) * 2018-12-29 2023-04-07 重庆邮电大学 Camera translation calibration method based on global linearity
CN110782524A (en) * 2019-10-25 2020-02-11 重庆邮电大学 Indoor three-dimensional reconstruction method based on panoramic image
CN110782524B (en) * 2019-10-25 2023-05-23 重庆邮电大学 Indoor three-dimensional reconstruction method based on panoramic image
CN111161355A (en) * 2019-12-11 2020-05-15 上海交通大学 Pure pose resolving method and system for multi-view camera pose and scene
CN111161355B (en) * 2019-12-11 2023-05-09 上海交通大学 Multi-view camera pose and scene pure pose resolving method and system
CN111724466A (en) * 2020-05-26 2020-09-29 同济大学 3D reconstruction optimization method and device based on rotation matrix
CN111724466B (en) * 2020-05-26 2023-09-26 同济大学 3D reconstruction optimization method and device based on rotation matrix
CN111986247A (en) * 2020-08-28 2020-11-24 中国海洋大学 Hierarchical camera rotation estimation method
CN111986247B (en) * 2020-08-28 2023-10-27 中国海洋大学 Hierarchical camera rotation estimation method
CN114170296A (en) * 2021-11-10 2022-03-11 埃洛克航空科技(北京)有限公司 Rotary average estimation method and device based on multi-mode comprehensive decision
CN114972536A (en) * 2022-05-26 2022-08-30 中国人民解放军战略支援部队信息工程大学 Aviation area array sweep type camera positioning and calibrating method
CN114972536B (en) * 2022-05-26 2023-05-09 中国人民解放军战略支援部队信息工程大学 Positioning and calibrating method for aviation area array swing scanning type camera

Also Published As

Publication number Publication date
CN108280858B (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN108280858A (en) A kind of linear global camera motion method for parameter estimation in multiple view reconstruction
CN108921926B (en) End-to-end three-dimensional face reconstruction method based on single image
CN108509848B (en) The real-time detection method and system of three-dimension object
CN108898630B (en) Three-dimensional reconstruction method, device, equipment and storage medium
Botsch et al. Adaptive space deformations based on rigid cells
CN104376552A (en) Virtual-real registering algorithm of 3D model and two-dimensional image
CN109214282A (en) A kind of three-dimension gesture critical point detection method and system neural network based
CN110705448A (en) Human body detection method and device
CN106340036A (en) Binocular stereoscopic vision-based stereo matching method
CN109191509A (en) A kind of virtual binocular three-dimensional reconstruction method based on structure light
CN102750704B (en) Step-by-step video camera self-calibration method
WO2018067978A1 (en) Method and apparatus for generating two-dimensional image data describing a three-dimensional image
CN110009674A (en) Monocular image depth of field real-time computing technique based on unsupervised deep learning
CN106934824B (en) Global non-rigid registration and reconstruction method for deformable object
CN103559737A (en) Object panorama modeling method
CN104794728A (en) Method for reconstructing real-time three-dimensional face data with multiple images
CN110349247A (en) A kind of indoor scene CAD 3D method for reconstructing based on semantic understanding
CN110223370A (en) A method of complete human body's texture mapping is generated from single view picture
CN110197255A (en) A kind of deformable convolutional network based on deep learning
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN108961385A (en) A kind of SLAM patterning process and device
CN106683163A (en) Imaging method and system used in video monitoring
CN108010122A (en) A kind of human 3d model rebuilds the method and system with measurement
CN109741403A (en) It is a kind of that scaling method is translated based on global linear camera
CN107330934B (en) Low-dimensional cluster adjustment calculation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240402

Address after: No. 07, Zone E, Shenxinda Zhongchuang Space, No. 2, 3rd Floor, Unit 1, Building 1, No. 252, Zhudu Avenue, Jiang'an Town, Yibin City, Sichuan Province, 644200

Patentee after: Yibin Zhibohui Technology Co.,Ltd.

Country or region after: China

Address before: 400065 Chongqing Nan'an District huangjuezhen pass Chongwen Road No. 2

Patentee before: CHONGQING University OF POSTS AND TELECOMMUNICATIONS

Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20240410

Address after: 812, No. 71 Jinyu West Street, Dalong Street, Panyu District, Guangzhou City, Guangdong Province, 511400

Patentee after: Guangzhou Jinlaojin Information Technology Co.,Ltd.

Country or region after: China

Address before: No. 07, Zone E, Shenxinda Zhongchuang Space, No. 2, 3rd Floor, Unit 1, Building 1, No. 252, Zhudu Avenue, Jiang'an Town, Yibin City, Sichuan Province, 644200

Patentee before: Yibin Zhibohui Technology Co.,Ltd.

Country or region before: China